4 research outputs found

    Leveraging Evolutionary Changes for Software Process Quality

    Real-world software applications must constantly evolve to remain relevant. This evolution occurs when developing new applications or adapting existing ones to meet new requirements, make corrections, or incorporate future functionality. Traditional methods of software quality control involve software quality models and continuous code inspection tools. These measures focus on directly assessing the quality of the software. However, there is a strong correlation and causation between the quality of the development process and the resulting software product. Therefore, improving the development process indirectly improves the software product, too. To achieve this, effective learning from past processes is necessary, often embraced through post-mortem organizational learning. While qualitative evaluation of large artifacts is common, the smaller quantitative changes captured by application lifecycle management are often overlooked. In addition to software metrics, these smaller changes can reveal complex phenomena related to project culture and management. Leveraging these changes can help detect and address such complex issues. Software evolution was previously measured by the size of changes, but the lack of consensus on a reliable and versatile quantification method prevents its use as a dependable metric. Different size classifications fail to reliably describe the nature of evolution. While application lifecycle management data is rich, identifying which artifacts can model detrimental managerial practices remains uncertain. Approaches such as simulation modeling, discrete-event simulation, or Bayesian networks have only limited ability to exploit continuous-time process models of such phenomena. Even worse, the accessibility and mechanistic insight into such gray- or black-box models are typically very low. To address these challenges, we suggest leveraging objectively [...]
    Comment: Ph.D. Thesis without appended papers, 102 pages

    Proposing an alternative allocation algorithm for smartphones: the case of Forall Phones

    The refurbished market has grown at a considerable pace in recent years. The development of this market is due to the efforts of several countries to move to the circular resource consumption that defines a circular economy (Mugge, Jockin, & Bocken, 2017). This is made possible by reducing the waste created and by reusing materials that would otherwise end up as waste, putting them to use again as resources (European Commission, 2019). This change has been embraced by organizations and their customers, who are beginning to see the refurbished market as a great opportunity. Companies see the opportunity to dispose of used product stocks and recover some of their value (Weelden, Bakker, & Mugge, 2016). Customers see the opportunity to buy fully functional products at a fraction of the original price. The development and growth of the refurbished market has been quite noticeable in the smartphone segment, including in Portugal, where Forall Phones has been excelling. This project was carried out in the logistics area (a key area for the company's performance), with the aim of improving the allocation of smartphones to sales channels. After mapping the processes, some opportunities for improvement were identified. Feedback on the approach (designed to respond to these opportunities for improvement) was then incorporated, and the approach was implemented in Excel. The approach was then compared with other approaches, and recommendations were subsequently suggested to further improve its performance.

    Stochastic routing optimized for autonomous driving

    In this thesis, we propose a novel algorithm for stochastic routing optimized for autonomous vehicles. The key idea of stochastic routing is to include information on travel time reliability, rather than only estimating travel time itself. Travel time reliability is of major importance for travelers and transportation managers, as it simplifies decision making and schedule planning. The concept of stochastic routing is then extended to fit the specific needs of an optimal autonomous drive. In the near future, when vehicles enabled with fully autonomous driving become available, the autonomous driving features will only be possible on roads that fulfil certain criteria. Thus, when searching for an optimal route for one origin-destination pair, we are interested not only in the travel time, but also in the route's properties concerning autonomous driving. We estimate path travel time reliability by using empirical travel time data at segment level. For that purpose we draw on the mathematical field of probability theory. First, we measure dependence between road segments. Then we use copulas to estimate the travel time distribution at path level by including the dependence between neighbouring road segments. In order to achieve the efficiency needed for a real-world application, we use the following hybrid approach: we take convolution, which assumes independence, and extend it to the dependent case by integrating copulas, referred to as copula-based Dependent Discrete Convolution (DDC). Based on DDC we develop a methodology for stochastic routing. We formulate a multicriteria optimization problem in order to find a route optimized for an autonomous drive. Different approaches to obtaining one optimal solution from the Pareto front are compared, and the best-fitting one is selected. This framework is then combined with the stochastic routing methodology.
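    The abstract only names copula-based Dependent Discrete Convolution; as a rough illustration of the underlying idea (not the thesis implementation), the Python sketch below contrasts plain discrete convolution of two segment travel-time distributions, which assumes independence, with a dependent variant whose joint distribution is built from a Gaussian copula. The travel-time data, the choice of a Gaussian copula, and the correlation parameter rho are hypothetical assumptions for illustration only.

        # Sketch: path travel time from two segment-level discrete distributions,
        # under independence (plain convolution) and under dependence modelled
        # with a Gaussian copula. Data and rho are hypothetical.
        import numpy as np
        from scipy.stats import norm, multivariate_normal

        seg_a = {30: 0.6, 40: 0.3, 60: 0.1}   # travel time in seconds -> probability
        seg_b = {20: 0.5, 35: 0.4, 50: 0.1}

        def independent_convolution(p, q):
            """Discrete convolution, assuming the two segments are independent."""
            out = {}
            for x, px in p.items():
                for y, qy in q.items():
                    out[x + y] = out.get(x + y, 0.0) + px * qy
            return out

        def dependent_convolution(p, q, rho=0.5):
            """Copula-based convolution: derive the joint pmf from a Gaussian
            copula over the marginal CDFs, then sum it over x + y = t."""
            xs, ys = sorted(p), sorted(q)
            Fx = np.cumsum([p[x] for x in xs])      # marginal CDF of segment a
            Fy = np.cumsum([q[y] for y in ys])      # marginal CDF of segment b
            cop = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

            def C(u, v):                            # Gaussian copula CDF
                u = min(max(u, 1e-12), 1 - 1e-12)
                v = min(max(v, 1e-12), 1 - 1e-12)
                return cop.cdf([norm.ppf(u), norm.ppf(v)])

            out = {}
            for i, x in enumerate(xs):
                for j, y in enumerate(ys):
                    u1, u0 = Fx[i], (Fx[i - 1] if i else 0.0)
                    v1, v0 = Fy[j], (Fy[j - 1] if j else 0.0)
                    # Rectangle probability gives the joint pmf P(X = x, Y = y).
                    pxy = C(u1, v1) - C(u0, v1) - C(u1, v0) + C(u0, v0)
                    out[x + y] = out.get(x + y, 0.0) + pxy
            return out

        print(independent_convolution(seg_a, seg_b))
        print(dependent_convolution(seg_a, seg_b, rho=0.5))

    With positive rho, probability mass concentrates on jointly fast or jointly slow outcomes, which is exactly the effect that plain convolution under the independence assumption cannot capture.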

    Business process improvement with performance-based sequential experiments

    Various lifecycle approaches to Business Process Management (BPM) share the assumption that a process is incrementally improved in the redesign phase. While this assumption is hardly questioned in BPM research, there is evidence from the field of AB testing that improvement concepts often do not lead to actual improvements. If incremental process improvement can only be achieved in a fraction of cases, there is a need to rapidly validate the assumed benefits. Contemporary BPM research does not provide techniques and guidelines for testing and validating the supposed improvements in a fair manner. In this research, we address these challenges by integrating business process execution concepts with ideas from a set of software engineering practices known as DevOps. We propose a business process improvement methodology named AB-BPM, together with a set of techniques that allow us to enact the steps in this methodology. As a first technique, we develop a simulation technique that estimates the performance of a new version in an offline setting using historical data of the old version. Since the results of simulation can be speculative, we propose shadow testing as the next step. Our shadow testing technique partially executes the new version in production alongside the old version in such a way that the new version does not throttle the old one. Finally, we develop techniques that offer AB testing for redesigned processes with immediate feedback at runtime. AB testing compares two versions of a deployed product (e.g., a Web page) by observing users' responses to versions A and B, and determines which one performs better. We propose two algorithms, LTAvgR and ProcessBandit, that dynamically adjust request allocation to the two versions during the test based on their performance.
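    LTAvgR and ProcessBandit themselves are not described in this abstract; as a minimal, hypothetical stand-in for the general idea of performance-based dynamic request allocation, the sketch below routes incoming process instances to version A or B with a simple epsilon-greedy rule and updates its choice from observed rewards (e.g., a quality or speed measure per completed instance). All names, parameters, and the reward model are illustrative assumptions, not the AB-BPM algorithms.

        # Hypothetical sketch of performance-based allocation between two deployed
        # process versions (NOT LTAvgR or ProcessBandit): an epsilon-greedy bandit
        # that mostly routes instances to the version with the better average
        # reward, while continuing to explore the other one.
        import random

        class TwoVersionAllocator:
            def __init__(self, epsilon=0.1):
                self.epsilon = epsilon
                # version -> [sum of observed rewards, number of instances routed]
                self.stats = {"A": [0.0, 0], "B": [0.0, 0]}

            def choose(self):
                """Pick a version for the next incoming process instance."""
                untried = [v for v, (_, n) in self.stats.items() if n == 0]
                if untried:
                    return random.choice(untried)
                if random.random() < self.epsilon:          # explore occasionally
                    return random.choice(list(self.stats))
                # Otherwise exploit the version with the best average reward.
                return max(self.stats, key=lambda v: self.stats[v][0] / self.stats[v][1])

            def record(self, version, reward):
                """Report the observed performance of a completed instance."""
                self.stats[version][0] += reward
                self.stats[version][1] += 1

        allocator = TwoVersionAllocator()
        for _ in range(1000):
            v = allocator.choose()
            # Simulated outcome: version B performs slightly better on average.
            reward = random.gauss(1.2 if v == "B" else 1.0, 0.3)
            allocator.record(v, reward)
        print({v: n for v, (_, n) in allocator.stats.items()})  # instances per version

    Over the run, allocation shifts toward the better-performing version while a small share of traffic keeps testing the alternative, which is the behaviour the abstract attributes to runtime AB testing with immediate feedback.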