
    Automating Regression Test Selection for Web Services

    As Web services grow in maturity and use, so do the methods used to test and maintain them. Regression testing is a major component of most major testing systems but has only begun to be applied to Web services. The majority of the tools and techniques applying regression testing to Web services focus on test-case generation, thus ignoring the potential savings of regression test selection. Regression test selection optimizes the regression testing process by selecting a subset of all tests, while still maintaining some level of confidence that the system performs no worse than the unmodified system. A safe regression test selection technique implies that, after selection, the level of confidence is as high as it would be if no tests were removed. Since safe regression test selection techniques generally involve code-based (white-box) testing, they cannot be directly applied to Web services due to their loosely coupled, standards-based, and distributed nature. A framework which automates both the regression test selection and regression testing processes for Web services in a decentralized, end-to-end manner is proposed. As part of this approach, special consideration is given to the concurrency issues which may occur in an autonomous and decentralized system. The resulting synchronization method is presented along with a set of algorithms which manage the regression testing and regression test selection processes throughout the system. A set of empirical results demonstrates the feasibility and benefit of the approach.
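
    The core idea of regression test selection can be illustrated with a minimal sketch: run only the tests whose invoked service operations intersect the set of modified operations. The sketch below is not the framework proposed in the thesis; the test-to-operation mapping and all names are hypothetical.

        // Minimal sketch of regression test selection: keep only those tests whose
        // invoked service operations intersect the set of modified operations.
        // The mapping from tests to operations is illustrative only.
        #include <iostream>
        #include <map>
        #include <set>
        #include <string>
        #include <vector>

        using Operation = std::string;
        using TestName  = std::string;

        std::vector<TestName> selectTests(
            const std::map<TestName, std::set<Operation>>& testsToOps,
            const std::set<Operation>& modifiedOps) {
            std::vector<TestName> selected;
            for (const auto& [test, ops] : testsToOps) {
                for (const auto& op : ops) {
                    if (modifiedOps.count(op)) {   // test exercises a changed operation
                        selected.push_back(test);
                        break;
                    }
                }
            }
            return selected;
        }

        int main() {
            std::map<TestName, std::set<Operation>> testsToOps = {
                {"testCheckout", {"getCart", "placeOrder"}},
                {"testSearch",   {"findProduct"}},
            };
            std::set<Operation> modified = {"placeOrder"};
            for (const auto& t : selectTests(testsToOps, modified))
                std::cout << t << '\n';   // prints: testCheckout
        }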

    Melhoria das práticas de construção de software: um caso de estudo

    Master's in Computer and Telematics Engineering. Many software development projects do not use explicit processes and practices to ensure the quality of the final product. In those cases, the organization of the construction environment arises from the pressing, day-to-day needs of the development team in a non-structured and non-scalable way. In the context of research projects involving software development, in which teams are strongly mutable, defining strategies for the software construction process is essential to streamline development, increase productivity and control the evolution of the product. This work analyzes and defines strategies for software construction, using as a case study the Rede Telemática Saúde (RTS) project of the Institute of Electronics and Telematics Engineering of Aveiro (IEETA), and their implementation through the introduction of best practices and tools that improve the evolution of the system. The implementation of these strategies includes configuration management disciplines, which ensure the consistency of project versions and their dependencies, and a continuous integration environment that validates all the source code produced by the development team using automated tests. Each version is composed of a set of tasks or topics assigned to individual team members and managed by priority criteria, leveraging the agility of the development process. The tasks and the whole development cycle are mapped onto a management platform, which is essential for high-level management. Additionally, a study was carried out to characterize current practices in the software construction process, through a survey of the Portuguese software industry. The proposed and implemented strategies allowed the construction process in the RTS project to be redefined, introducing greater control over the production line, especially in the early identification of defects and in version control. These results are aligned with the priority needs identified in the industry survey.

    The United States Marine Corps Data Collaboration Requirements: Retrieving and Integrating Data From Multiple Databases

    The goal of this research is to develop an information sharing and database integration model and suggest a framework to fully satisfy the United States Marine Corps collaboration requirements as well as its information sharing and database integration needs. This research is exploratory; it focuses on only one initiative: the IT-21 initiative. The IT-21 initiative is guided by Technology for the United States Navy and Marine Corps, 2000-2035: Becoming a 21st Century Force. The IT-21 initiative states that the Navy and Marine Corps information infrastructure will be based largely on commercial systems and services, and that the Department of the Navy must ensure that these systems are seamlessly integrated and that information transported over the infrastructure is protected and secure. The Delphi Technique, a qualitative method, was used to develop a Holistic Model and to suggest a framework for information sharing and database integration. Data was primarily collected from mid-level to senior information officers, with a focus on Chief Information Officers. In addition, an extensive literature review was conducted to gain insight into known similarities and differences in Strategic Information Management, information sharing strategies, and database integration strategies. It is hoped that the Armed Forces and the Department of Defense will benefit from future development of the information sharing and database integration Holistic Model.

    Business optimization through automated signaling design

    M.Ing. (Engineering Management). Abstract: Railway signaling has become pivotal in the development of railway systems over the years. There is a global demand for upgrading signaling systems for improved efficiency. Upgrading signaling systems requires new signaling designs and modifications to adjacent signaling systems. The purpose of this research is to compare manually produced designs with design automation by covering the framework of multiple aspects of railway signaling design, in view of business optimization, using computer drawings, a programming language and the management of signaling designs. The research focuses on design automation from the preliminary design stage to the detailed design stage, with the intention of investigating and resolving a common project challenge: time management. Various autonomous methods are used to seek improvement in the detailed design phase of re-signaling projects. An analysis of the project's duration, resources and review cycles is conducted to demonstrate the challenges faced during the design of a project. Signaling designs are sophisticated and crucial in an ever-changing railway environment. As a result, there is a demand for efficiency and knowledge within railway signaling to meet project completion target dates. A quantitative approach is used to identify the gaps leading to delays, and best practices are applied using a comparative analysis to remediate any snags that may potentially extend the project duration. The results illustrate that the resources required when automating detailed designs are reduced by two thirds for cable plans and books of circuits, and by one third for source documents. Consequently, projects benefit from reduced organizational resources, reduced design durations and reduced design review cycles. This research concludes that, owing to the efficiency and innovation of the selected computer drawing software, such as AutoCAD, and programming language, computer drawings generated using automation tools require fewer resources than drawings generated manually. The resources required when automating the generation of detailed signaling designs are reduced for cable plans, books of circuits and source documents. This means that the business is optimized by using fewer resources, and delays during the design stage are consequently reduced.

    Design of a horizontally scalable backend application for online games

    The mobile game market is growing in popularity year after year, attracting a wide audience of independent developers who must withstand competition from more resourceful game companies. Players expect high-quality games and experiences, while developers strive to monetize. Research has shown a correlation between certain features of a game and its likelihood to succeed and become a potential candidate for the top-grossing lists. This thesis focuses on identifying the trending features found in the currently most successful games, and proposes the design of a scalable, flexible and modular backend application which integrates all the services needed to fulfill the common needs of a mobile online game. A microservice-oriented architecture has been used as the basis for the system design, leading to a modular decomposition of features into small, independent, reusable services. The system and microservice designs comply with the Reactive Manifesto, allowing the application to achieve responsiveness, elasticity, resiliency and asynchronicity. Owing to these properties, the application is suitable for deployment in a cloud environment, covering the requirements of both small games and popular games with a high load of traffic and many concurrent players. In addition to the application and microservice design, the thesis includes a discussion of the technology stack for a possible implementation and the recommended setup for three use-case scenarios.
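
    A property underpinning such horizontally scalable designs is that request handlers are stateless and asynchronous, so identical instances can be replicated behind a load balancer. The sketch below illustrates that property with standard C++ futures; MatchRequest and handleMatchRequest are hypothetical names and not part of the proposed backend.

        // A stateless, asynchronous request handler: because it keeps no per-player
        // state in memory, any number of identical instances can serve requests in
        // parallel (the basis of horizontal scaling). Names are illustrative only.
        #include <future>
        #include <iostream>
        #include <string>
        #include <vector>

        struct MatchRequest {
            std::string playerId;
            int rating;
        };

        // Pure function of its input: no shared mutable state, safe to replicate.
        std::string handleMatchRequest(const MatchRequest& req) {
            return "queued " + req.playerId + " at rating " + std::to_string(req.rating);
        }

        int main() {
            std::vector<MatchRequest> incoming = {{"alice", 1200}, {"bob", 1350}};
            std::vector<std::future<std::string>> results;
            for (const auto& req : incoming)   // each request is processed asynchronously
                results.push_back(std::async(std::launch::async, handleMatchRequest, req));
            for (auto& f : results)
                std::cout << f.get() << '\n';
        }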

    Automated Realistic Test Input Generation and Cost Reduction in Service-centric System Testing

    Service-centric System Testing (ScST) is more challenging than testing traditional software due to the complexity of service technologies and the limitations imposed by the SOA environment. One of the most important problems in ScST is realistic test data generation. Realistic test data is often generated manually or taken from an existing source, making it hard to automate and laborious to produce. Another limitation that makes ScST challenging is the cost associated with invoking services during the testing process. This thesis aims to provide solutions to these problems: automated realistic input generation and cost reduction in ScST. To address automation in realistic test data generation, the concept of Service-centric Test Data Generation (ScTDG) is presented, in which existing services are used as realistic data sources. ScTDG minimises the need for tester input and the dependence on existing data sources by automatically generating service compositions that can generate the required test data. In experimental analysis, this approach achieved between 93% and 100% success rates in generating realistic data, whereas state-of-the-art automated test data generation achieved only between 2% and 34%. The thesis addresses cost concerns at the test data generation level by enabling data source selection in ScTDG. Source selection in ScTDG has many dimensions, such as cost, reliability and availability. This thesis formulates it as an optimisation problem and presents a multi-objective characterisation of service selection in ScTDG, aiming to reduce the cost of test data generation. A cost-aware Pareto-optimal test suite minimisation approach addressing testing cost concerns during test execution is also presented. The approach adapts traditional multi-objective minimisation approaches to the ScST domain by formulating ScST concerns such as invocation cost and test case reliability. In experimental analysis, the approach achieved reductions of between 69% and 98.6% in the monetary cost of service invocations during testing.
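
    The multi-objective view of test suite minimisation can be illustrated with a Pareto dominance check: a candidate suite dominates another if it is no worse on every objective and strictly better on at least one. The sketch below uses made-up objective values (coverage, invocation cost, reliability) and is not the thesis's algorithm.

        // Illustrative Pareto dominance check for multi-objective test suite
        // minimisation. Objective values below are invented for the example.
        #include <iostream>
        #include <vector>

        struct Candidate {
            double coverage;        // maximise
            double invocationCost;  // minimise (monetary cost of service calls)
            double reliability;     // maximise
        };

        bool dominates(const Candidate& a, const Candidate& b) {
            bool noWorse = a.coverage >= b.coverage &&
                           a.invocationCost <= b.invocationCost &&
                           a.reliability >= b.reliability;
            bool strictlyBetter = a.coverage > b.coverage ||
                                  a.invocationCost < b.invocationCost ||
                                  a.reliability > b.reliability;
            return noWorse && strictlyBetter;
        }

        // Keep only candidates not dominated by any other (the Pareto front).
        std::vector<Candidate> paretoFront(const std::vector<Candidate>& all) {
            std::vector<Candidate> front;
            for (const auto& c : all) {
                bool dominated = false;
                for (const auto& other : all)
                    if (dominates(other, c)) { dominated = true; break; }
                if (!dominated) front.push_back(c);
            }
            return front;
        }

        int main() {
            std::vector<Candidate> suites = {
                {0.95, 12.0, 0.90}, {0.90, 5.0, 0.92}, {0.80, 20.0, 0.70}};
            std::cout << "Pareto-optimal suites: " << paretoFront(suites).size() << '\n'; // 2
        }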

    Test Flakiness Prediction Techniques for Evolving Software Systems


    8th SC@RUG 2011 proceedings: Student Colloquium 2010-2011

    Parallel Programming with Global Asynchronous Memory: Models, C++ APIs and Implementations

    In the realm of High Performance Computing (HPC), message passing has been the programming paradigm of choice for over twenty years. The durable MPI (Message Passing Interface) standard, with send/receive communication, broadcast, gather/scatter, and reduction collectives, is still used to construct parallel programs in which each communication is orchestrated by the developer based on precise knowledge of data distribution and overheads; collective communications simplify the orchestration but might induce excessive synchronization. Early attempts to bring the shared-memory programming model, with its programming advantages, to distributed computing, referred to as the Distributed Shared Memory (DSM) model, faded away; one of the main issues was combining performance and programmability with the memory consistency model. The recently proposed Partitioned Global Address Space (PGAS) model is a modern revamp of DSM that exposes data placement to enable locality-based optimizations, but it still addresses only (simple) data parallelism and relies on expensive sharing protocols. We advocate an alternative programming model for distributed computing based on a Global Asynchronous Memory (GAM), aiming to avoid coherency and consistency problems rather than solving them. We materialize GAM by designing and implementing a distributed smart pointers library, inspired by C++ smart pointers. In this model, public and private pointers (resembling C++ shared and unique pointers, respectively) are moved around instead of messages (i.e., data), thus relieving the user of the burden of minimizing transfers. On top of smart pointers, we propose a high-level C++ template library for writing applications in terms of dataflow-like networks, namely GAM nets, consisting of stateful processors exchanging pointers in a fully asynchronous fashion. We demonstrate the validity of the proposed approach, from the expressiveness perspective, by showing how GAM nets can be exploited to implement both standalone applications and higher-level parallel programming models, such as data and task parallelism. As for the performance perspective, preliminary experiments show both close-to-ideal scalability and negligible overhead with respect to state-of-the-art benchmark implementations. For instance, the GAM implementation of a high-quality video restoration filter sustains a 100 fps throughput over 70%-noisy high-quality video streams on a 4-node cluster of Graphics Processing Units (GPUs), with minimal programming effort.
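
    The "move pointers, not data" idea can be sketched with standard C++ smart pointers on a single node: ownership of a block is transferred downstream instead of copying the payload. The actual GAM library provides distributed public and private pointers; the code below is only a local analogy, and the names used are not part of its API.

        // Local analogy of GAM-style pointer passing using std::unique_ptr:
        // ownership of a data block moves from producer to consumer; the
        // payload itself is never copied.
        #include <iostream>
        #include <memory>
        #include <vector>

        using Frame = std::vector<int>;   // stand-in for a video frame or data block

        // A "private" block: exactly one owner at a time, transferred by move.
        std::unique_ptr<Frame> produce() {
            return std::make_unique<Frame>(Frame{1, 2, 3, 4});
        }

        // The next processor takes ownership; no copy of the payload is made.
        void consume(std::unique_ptr<Frame> frame) {
            int sum = 0;
            for (int v : *frame) sum += v;
            std::cout << "processed frame, sum = " << sum << '\n';
        }

        int main() {
            auto frame = produce();
            consume(std::move(frame));   // ownership moves downstream, like a private pointer
            // A std::shared_ptr would play the role of a public (shared) block.
        }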