
    Program Transformations for Asynchronous and Batched Query Submission

    The performance of applications backed by databases or Web services can be significantly improved by submitting queries/requests asynchronously, well ahead of the point where the results are needed, so that the results are likely to have been fetched by the time they are actually used. However, manually rewriting applications to exploit asynchronous query submission is tedious and error-prone. In this paper we address the issue of automatically transforming a program written assuming synchronous query submission into one that exploits asynchronous query submission. Our program transformation method is based on data-flow analysis and is framed as a set of transformation rules. Our rules can handle query executions within loops, unlike some of the earlier work in this area. We also present a novel approach that, at runtime, can combine multiple asynchronous requests into batches, thereby achieving the benefits of batching in addition to those of asynchronous submission. We have built a tool that implements our transformation techniques on Java programs that use JDBC calls; the tool can be extended to handle Web service calls. A detailed experimental study on several real-life applications shows the effectiveness of the proposed rewrite techniques, both in terms of their applicability and the performance gains achieved.
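    The paper's transformation operates on Java/JDBC programs via rewrite rules; as a minimal language-neutral sketch of the core idea, the Python fragment below hoists query submission ahead of the point of use so that latencies overlap (`run_query` and the customer IDs are hypothetical stand-ins, not from the paper).

```python
from concurrent.futures import ThreadPoolExecutor
import time

# Hypothetical stand-in for a database query (the paper targets JDBC calls).
def run_query(customer_id):
    time.sleep(0.05)          # simulate network/database latency
    return {"id": customer_id, "balance": 100 * customer_id}

def report_sync(ids):
    # Original form: each query is issued exactly where its result is used.
    total = 0
    for cid in ids:
        row = run_query(cid)  # blocks here on every iteration
        total += row["balance"]
    return total

def report_async(ids):
    # Transformed form: all submissions are hoisted ahead of the uses,
    # so the queries overlap instead of running back to back.
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(run_query, cid) for cid in ids]  # submit early
        return sum(f.result()["balance"] for f in futures)      # consume later
```

Both versions compute the same result; the transformed one issues all queries before the first result is consumed, which is what enables the runtime batching the paper adds on top.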

    Algorithmic patterns for ℋ-matrices on many-core processors

    In this work, we consider the reformulation of hierarchical (ℋ) matrix algorithms for many-core processors, with a model implementation on graphics processing units (GPUs). ℋ-matrices approximate specific dense matrices, e.g., from discretized integral equations or kernel ridge regression, leading to log-linear time complexity in dense matrix-vector products. Parallelizing ℋ-matrix operations on many-core processors is difficult due to the complex nature of the underlying algorithms. While previous algorithmic advances for many-core hardware focused on accelerating existing ℋ-matrix CPU implementations with many-core processors, we here aim at relying entirely on that processor type. As our main contribution, we introduce the parallel algorithmic patterns necessary to map the full ℋ-matrix construction and the fast matrix-vector product to many-core hardware. Crucial ingredients are space-filling curves, parallel tree traversal, and batching of linear algebra operations. The resulting model GPU implementation, hmglib, is, to the best of the authors' knowledge, the first entirely GPU-based open-source ℋ-matrix library of this kind. We conclude this work with an in-depth performance analysis and a comparative performance study against a standard ℋ-matrix library, highlighting profound speedups of our many-core parallel approach.
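    One of the ingredients named above, space-filling curves, is commonly realized with Morton (Z-order) codes: interleaving coordinate bits so that sorting by code groups spatial neighbours, which makes parallel tree construction a sort-and-partition problem. A small illustrative sketch (not hmglib's actual implementation):

```python
def interleave_bits(x, y, bits=16):
    # 2-D Morton (Z-order) code: interleave the bits of x and y so that
    # points close on the curve tend to be close in space.
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)      # x bits at even positions
        code |= ((y >> i) & 1) << (2 * i + 1)  # y bits at odd positions
    return code

# Sorting points by Morton code groups spatial neighbours together,
# which is how space-filling curves aid parallel cluster-tree construction.
points = [(3, 1), (0, 0), (1, 3), (1, 0)]
ordered = sorted(points, key=lambda p: interleave_bits(p[0], p[1]))
```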

    Product Return Handling

    In this article we focus on product return handling and warehousing issues. In some businesses return rates can be well over 20%, and returns can be especially costly when not handled properly. In spite of this, many managers have handled returns in an improvised manner. The fact that quantitative methods to support return handling decisions barely exist adds to this. In this article we bridge those issues by (1) going over the key decisions related to return handling and (2) identifying quantitative models to support those decisions. Furthermore, we provide insights on directions for future research. Keywords: reverse logistics; decision-making; quantitative models; retailing and warehousing.

    System-Level Cascade Detection


    A Multi-Objective Affinity-Based Savings Algorithm for Improving Processes in Centralized Warehousing Operations

    Traditional approaches to improving material management processes in warehousing operations tend to focus on one of three major areas: facility design, order picking and sorting, and order batching. In an effort to improve total system savings, a new affinity function is developed and applied to batching logic to create a multi-objective problem. The proposed multi-objective function incorporates user input to increase adaptability to changing demand and flexibility to changing requirements. Computational experience shows the new function leads to solutions that deviate no more than 25% from the most efficient distance-based picking route produced by the same batching logic, while creating savings in the sorting process at the centralized warehouse. The new function reduces savings loss from non-compliance of order pickers through its multi-objective design and responds quickly to a rapidly changing environment through user input. The promising results of the proposed function open the door for additional objectives, such as on-time performance, to be applied to the same logic.
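    The abstract does not reproduce the affinity function itself, so the following is only a hypothetical sketch of the general pattern: a Clarke-Wright-style distance saving for merging two orders into one picking batch, blended with an affinity score via user-supplied weights (all names, weights, and numbers below are illustrative assumptions).

```python
def combined_savings(dist_saving, affinity, w_dist=0.75, w_aff=0.25):
    # Hypothetical weighted multi-objective score; the paper's actual
    # affinity function is not given in the abstract. dist_saving is the
    # classic travel-distance saving from batching two orders together;
    # affinity scores how well the orders sort together downstream.
    return w_dist * dist_saving + w_aff * affinity

# Greedy batching: merge the order pair with the highest combined score first.
pairs = {("A", "B"): combined_savings(10.0, 2.0),
         ("A", "C"): combined_savings(4.0, 9.0),
         ("B", "C"): combined_savings(6.0, 6.0)}
best = max(pairs, key=pairs.get)
```

Adjusting `w_dist` and `w_aff` is how user input would trade picking distance against sorting savings in this sketch.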

    JUICE BLENDING/BATCHING SYSTEM

    The food manufacturing industry is always ripe for the integration of major improvements in process control and intelligent processing. Even though many new devices and systems are available for implementation in the food processing industry, the majority of routine processing steps are still controlled and performed by workers. This report covers the design and implementation of the logic control functions for a single-product batch process. The aim is to use a digital Programmable Logic Controller (PLC) module to control the sequences in a batching process that produces fruit juices, making the system fully automated through an appropriate ladder diagram. A further aim of this project is to drive the control valve opening and closing from 0% to 100%. The project also demonstrates the advantages of using a PLC in a plant. The batching system is restricted to four kinds of materials as the basis of this case study, and includes 5 valves, one agitator motor, one pump and a batching tank. The end product of the batching system is a commercial fruit juice. An OMRON PLC is used for programming; the ladder diagram is created with the CX-Programmer Version 3.0 ladder diagram language. The PLC device type is the SYSMAC CQM1H-CPU21 with the SYSMAC Way network type.
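    The actual control runs as an OMRON ladder program, which cannot be shown directly here; as a rough sketch of the sequencing logic such a batch cycle encodes, the fragment below steps through four hypothetical ingredient valves (the valve names and setpoint volumes are illustrative, not from the report), then starts the agitator and discharge pump.

```python
# Hypothetical recipe: (inlet valve, litres to add) for each of the four
# materials the batching system handles. Values are illustrative only.
RECIPE = [("valve_1", 40), ("valve_2", 30), ("valve_3", 20), ("valve_4", 10)]

def run_batch(recipe):
    # Each step opens one inlet valve, fills until the step setpoint is
    # reached, then closes it before the next step -- the same interlocked
    # sequence a ladder program expresses with rungs and timers/counters.
    log, level = [], 0
    for valve, amount in recipe:
        log.append(f"{valve} open")
        level += amount                  # fill to the step setpoint
        log.append(f"{valve} closed at {level} L")
    log.append("agitator on")            # mix the blended juice
    log.append("pump on: discharge")     # empty the batching tank
    return level, log
```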

    Managing the Process of Engineering Change Orders: The Case of the Climate Control System in Automobile Development

    Engineering change orders (ECOs) are part of almost every development process, consuming a significant part of engineering capacity and contributing heavily to development and tooling costs. Many companies use a support process to administer ECOs, which fundamentally determines ECO costs. This administrative process spans the emergence of a change (e.g., a problem or a market-driven feature change), management approval of the change, and the change's final implementation. Despite the tremendous time pressure in development projects in general, and in the ECO process in particular, this process can consume several weeks, several months, and in extreme cases even more than a year. Based on an in-depth case study of climate control system development in a vehicle, we identify five key contributors to long ECO lead times: a complex approval process, snowballing changes, scarce capacity and congestion, setups and batching, and organizational issues. Based on the case observations, we outline a number of improvement strategies an organization can follow to reduce its ECO lead times.

    Exploring Performance with Apollo Federation

    The growing tendency toward cloud-hosted computing and availability supported a shift in software architecture to better take advantage of such technological advancements. As Monolithic Architecture evolved and matured, businesses grew their dependency on software solutions, which motivated the shift to Microservice Architecture. A comparable shift occurred in the evolution of monolithic GraphQL solutions, which, through their growth and evolution, also required a way forward in solving some of their bottleneck issues. One of the alternatives, already chosen and proven by some enterprises, is GraphQL Federation. Due to its novelty, there is still a lack of knowledge and testing on the performance of the GraphQL Federation architecture and on how techniques such as caching strategies, batching and execution strategies impact it. This thesis aims to fill this gap by first contextualizing the different aspects of GraphQL and GraphQL Federation and investigating the available, documented enterprise scenarios to extract best practices and to better understand how to prepare such a performance evaluation. Next, multiple alternatives underwent the Analytic Hierarchy Process to choose the best way to develop a scenario that enables the performance analysis in a standard and structured way. Following this, the alternative base solutions were analysed and compared to determine the best fit for this thesis. Functional and non-functional requirements were collected, along with the rest of the design exercise, to refine the solution to be tested for performance. Finally, after the required development and implementation work was documented, the solution was tested following the Goal Question Metric methodology, utilizing tools such as JMeter, Prometheus and Grafana to collect and visualize the performance data.
It was possible to conclude that different caching, batching and execution strategies do indeed have an impact on the GraphQL Federation solution. These impacts shift between positive (improvements in performance) and negative (performance hindered by the strategy) across the different tested strategies.
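    Of the techniques the thesis evaluates, batching is typically realized in GraphQL servers with a DataLoader-style loader: per-key requests issued by resolvers are queued and satisfied by one batched backend fetch, cutting round trips to the subgraph. A minimal illustrative sketch (not the thesis's code; `fetch_users` and the class below are assumptions for illustration):

```python
class BatchLoader:
    # Minimal DataLoader-style batcher: load() queues a key and returns a
    # deferred result; dispatch() resolves every queued key with a single
    # batched call to the backing data source.
    def __init__(self, batch_fn):
        self.batch_fn = batch_fn
        self.queue = []
        self.cache = {}

    def load(self, key):
        self.queue.append(key)
        return lambda: self.cache[key]   # deferred: read after dispatch()

    def dispatch(self):
        self.cache.update(self.batch_fn(self.queue))  # one backend call
        self.queue = []

calls = []
def fetch_users(ids):
    calls.append(list(ids))              # record: a single batched request
    return {i: f"user-{i}" for i in ids}

loader = BatchLoader(fetch_users)
deferred = [loader.load(i) for i in (1, 2, 3)]
loader.dispatch()
results = [d() for d in deferred]        # three keys, one backend round trip
```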