
    Preprint: Open Source Compiling for V1Model RMT Switch: Making Data Center Networking Innovation Accessible

    Full text link
    Very few innovations in networking research have seen data-center-scale deployment, because the extreme performance requirements of data center networks demand hardware implementation, which is accessible only to a few. However, the emergence of switches based on the reconfigurable match-action table (RMT) paradigm has finally opened up the development life cycle of data plane devices. The P4 language is the dominant choice for programming these devices: network operators can now implement a desired feature on white-box RMT switches by writing new algorithms in P4 and compiling them for the target hardware. However, one roadblock remains. The compilation technology for P4 programs is not fully open source, so it is very difficult for an average researcher to gain deep insight into the performance of an innovation when executed at the silicon level. No open-source compiler backend is available for this purpose. The proprietary backends provided by hardware vendors are closed source and do not expose their internal mapping mechanisms, which inhibits experimentation with new mapping algorithms and innovative instruction sets for the RMT architecture. This paper describes our work toward an open-source compiler backend for compiling P4-16 programs targeting V1Model architecture-based programmable switches.
    Comment: arXiv admin note: substantial text overlap with arXiv:2208.1289
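    To make the mapping problem concrete, here is a minimal Python sketch, with purely illustrative table names, capacities, and stage counts (none taken from the paper), of the kind of decision such a compiler backend must make: placing logical match-action tables into a fixed number of pipeline stages while respecting inter-table dependencies and per-stage memory limits.

    ```python
    # Hypothetical sketch: greedy first-fit mapping of logical match-action
    # tables onto RMT pipeline stages, honoring table dependencies and
    # per-stage memory capacity. All names and numbers are illustrative,
    # not the actual V1Model backend described in the paper.

    from dataclasses import dataclass, field

    @dataclass
    class Table:
        name: str
        memory_kb: int                                   # match memory the table needs
        depends_on: list = field(default_factory=list)   # tables that must be placed earlier

    STAGE_CAPACITY_KB = 128   # assumed per-stage match memory
    NUM_STAGES = 12           # assumed pipeline depth

    def map_tables(tables):
        """Place each table in the earliest stage that satisfies its
        dependencies and has enough free memory (tables assumed to
        arrive in topological order)."""
        placement = {}                        # table name -> stage index
        free = [STAGE_CAPACITY_KB] * NUM_STAGES
        for t in tables:
            earliest = max((placement[d] + 1 for d in t.depends_on), default=0)
            for s in range(earliest, NUM_STAGES):
                if free[s] >= t.memory_kb:
                    placement[t.name] = s
                    free[s] -= t.memory_kb
                    break
            else:
                raise RuntimeError(f"cannot place {t.name}: pipeline full")
        return placement

    tables = [
        Table("ipv4_lpm", 64),
        Table("acl", 48, depends_on=["ipv4_lpm"]),
        Table("nexthop", 32, depends_on=["ipv4_lpm"]),
    ]
    print(map_tables(tables))  # {'ipv4_lpm': 0, 'acl': 1, 'nexthop': 1}
    ```

    Real backends solve a far richer version of this problem (action memories, crossbar widths, hash units), which is precisely the machinery that closed-source backends hide.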

    Parallel architectures and runtime systems co-design for task-based programming models

    Get PDF
    The increasing parallelism of modern computing systems has heightened the need for a holistic vision when designing multiprocessor architectures, one that takes into account the needs of the programming models and applications. Nowadays, system design consists of several layers stacked on top of each other, from the architecture up to the application software. Although this layering allows a separation of concerns, where each layer can be changed independently thanks to well-defined interfaces between them, it hampers the design of future systems as Moore's Law reaches its end. Current performance improvements in computer architecture are driven by shrinking the transistor channel width, which allows faster and more power-efficient chips to be made. However, technology is reaching physical limits beyond which the transistor size cannot be reduced further, and this demands a change of paradigm in system design. This thesis proposes to break this layered design and advocates for a system in which the architecture and the programming model's runtime system can exchange information toward a common goal: improving performance and reducing power consumption. By making the architecture aware of runtime information, such as the Task Dependency Graph (TDG) of a dataflow task-based programming model, power consumption can be reduced by exploiting the critical path of the graph. Moreover, the architecture can provide hardware support for building this graph, reducing runtime overheads and making the execution of fine-grained tasks feasible, which increases the available parallelism. Finally, the current status of inter-node communication primitives can be exposed to the runtime system to enable more efficient communication scheduling, creating new opportunities for computation-communication overlap that were not possible before. The thesis provides an evaluation of these proposals, together with a methodology to simulate and characterize application behavior.
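    As an illustration of the critical-path idea, and not of the thesis's actual implementation, the following Python sketch computes each task's earliest finish time over a toy TDG; tasks with slack relative to the critical path are candidates for slower, more power-efficient execution.

    ```python
    # Illustrative sketch (not the thesis's runtime): compute the critical
    # path of a task dependency graph (TDG) so non-critical tasks can be
    # steered to slower, more power-efficient cores or DVFS states.

    def critical_path(costs, deps):
        """costs: task -> execution time; deps: task -> predecessor tasks.
        Returns (critical path length, earliest finish time per task)."""
        finish = {}
        def ft(t):
            if t not in finish:
                finish[t] = costs[t] + max((ft(p) for p in deps.get(t, [])), default=0)
            return finish[t]
        length = max(ft(t) for t in costs)
        return length, finish

    costs = {"a": 4, "b": 2, "c": 3, "d": 1}
    deps = {"b": ["a"], "c": ["a"], "d": ["b", "c"]}
    length, finish = critical_path(costs, deps)
    # The chain a -> c -> d (4 + 3 + 1 = 8) is critical, so task "b" has
    # slack and can run slowly without delaying the whole graph.
    print(length, finish)   # 8 {'a': 4, 'b': 6, 'c': 7, 'd': 8}
    ```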

    Test Generation and Dependency Analysis for Web Applications

    Get PDF
    In web application testing, existing model-based web test generators derive test paths from a navigation model of the web application, completed with either manually or randomly generated inputs. Test path extraction and input generation are handled separately, ignoring the fact that generating inputs for a test path is difficult or even impossible if the path is infeasible. In this thesis, we propose three directions to mitigate the path infeasibility problem. The first uses a search-based approach, defining a novel set of genetic operators that support the joint generation of test inputs and feasible test paths. Results show that this search-based approach can achieve a higher level of model coverage than existing approaches. Second, we propose a novel web test generation algorithm that pre-selects the most promising candidate test cases based on their diversity from previously generated tests. Our empirical evaluation shows that promoting diversity is beneficial not only for a thorough exploration of the web application's behaviours, but also for the feasibility of the automatically generated test cases. Moreover, the diversity-based approach achieves higher coverage of the navigation model significantly faster than crawling-based and search-based approaches. The third approach uses a web crawler as a test generator. As such, the generated tests are concrete, and hence their navigations among the web application's states are feasible by construction. However, the crawling trace cannot easily be turned into a minimal test suite that achieves the same coverage, due to test dependencies. Such dependencies are undesirable in the context of regression testing, since they prevent the adoption of testing optimization techniques that assume tests to be independent. In this thesis, we propose the first approach to detect test dependencies in a given web test suite by leveraging the information available both in the web test code and on the client side of the web application. Our empirical validation shows that the approach can effectively and efficiently detect test dependencies, and that it enables dependency-aware formulations of test parallelization and test minimization.
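    The following Python sketch, with illustrative names rather than the thesis's actual tool, shows one plausible form of the diversity-based pre-selection: score each candidate test by its minimum distance to the tests generated so far and keep the maximin winner.

    ```python
    # Hedged sketch of diversity-driven candidate pre-selection (names are
    # illustrative): among candidate test paths, keep the one most distant
    # from the tests generated so far, spreading exploration over the
    # navigation model.

    def jaccard_distance(a, b):
        """Distance between two tests viewed as sets of covered transitions."""
        a, b = set(a), set(b)
        union = a | b
        return 1.0 - len(a & b) / len(union) if union else 0.0

    def pick_most_diverse(candidates, generated):
        """Return the candidate whose minimum distance to the already
        generated tests is largest (maximin diversity)."""
        if not generated:
            return candidates[0]
        return max(candidates,
                   key=lambda c: min(jaccard_distance(c, g) for g in generated))

    generated = [["home->login", "login->account"]]
    candidates = [["home->login", "login->error"],
                  ["home->search", "search->results"]]
    print(pick_most_diverse(candidates, generated))
    # ['home->search', 'search->results'] -- shares no transitions with history
    ```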

    CellSs : Scheduling techniques to better exploit memory hierarchy

    Get PDF
    Cell Superscalar's (CellSs) main goal is to provide a simple, flexible and easy programming approach for the Cell Broadband Engine (Cell/B.E.) that automatically exploits the inherent concurrency of applications at the task level. The CellSs environment is based on a source-to-source compiler that translates annotated C or Fortran code, and a runtime library tailored for the Cell/B.E. that takes care of the concurrent execution of the application. The first efforts at task scheduling in CellSs derived from very simple heuristics. This paper presents new scheduling techniques developed for CellSs with the purpose of improving application performance. Additionally, the design of a new scheduling algorithm is detailed and the algorithm is evaluated. The CellSs scheduler takes into account an extension of the memory hierarchy for the Cell/B.E.: a cache memory shared between the SPEs. All the new scheduling practices have been evaluated, showing better behavior of our system.
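    As a hedged sketch of the locality idea, and not the actual CellSs scheduler, the following Python fragment prefers the SPE whose recently touched data blocks overlap most with a ready task's inputs, so the shared cache is reused rather than refilled from main memory.

    ```python
    # Illustrative locality-aware placement: assign a ready task to the SPE
    # whose recent working set already contains the task's input blocks.
    # Block ids and SPE counts are made up for the example.

    def pick_spe(task_inputs, spe_recent_blocks):
        """task_inputs: set of data-block ids the task reads.
        spe_recent_blocks: one set of recently touched block ids per SPE.
        Returns the index of the SPE with the largest overlap."""
        overlaps = [len(task_inputs & blocks) for blocks in spe_recent_blocks]
        return max(range(len(overlaps)), key=overlaps.__getitem__)

    spe_recent_blocks = [{1, 2, 3}, {4, 5}, {6}]
    print(pick_spe({2, 3, 9}, spe_recent_blocks))   # 0: two of its blocks are warm
    ```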

    New techniques for the analysis of flexible operation of gas turbine based systems

    Get PDF
    In the current European energy market, gas power plants are required to operate in cyclical modes to fill the gaps in renewable energy supply. Renewable sources have dispatch priority due to their relatively low variable operational costs. However, because of their high unpredictability, conventional power plants such as Combined Cycle Power Plants (CCPP) now operate with frequent load changes to fill the gaps in supply by participating in the balancing market. Substantial efforts to develop innovative solutions to these new challenges are being invested by the commercial and research community, where improving the understanding of complex part-load operation is of utmost techno-economic importance. To date, the main techniques used to simulate part-load operation of CCPPs were developed in the late twentieth century and are based on cumbersome, iterative methods requiring an initial approximation of variables. In the wake of recent large-scale renewable power installations, these techniques are not effective enough to carry out the complex optimisation studies needed to adapt CCPPs to quickly evolving market conditions. A number of improvements have been proposed; however, these modified methods cannot cope with the complexity and flexibility required to study various component layout optimisations and their impact on techno-economic performance. The current work pursues a novel method for part-load performance estimation of CCPPs which is less complex, more effective, and can be seamlessly applied to any further optimisation studies. Initially, the technique was developed based on a binary-coded genetic algorithm. The method enables simulation of part-load performance without the need for an initial guess of variables, thus simplifying the procedure. The method has been validated against commercial software, showing good agreement in the results. However, it was concluded that the method does not provide a long-term benefit to the research community because it is fundamentally based on search-space iterations with an unavoidable residual (error) in the solution, and it requires significant computational time. The complex optimisation studies conducted by other authors require a much simpler and more flexible method. This led to the development of a novel Direct Solution Method (DSM), which provides a simple solution with zero residual and without the need for cumbersome iterations. The DSM has been validated against commercial software, showing good agreement, thus proving to be a promising alternative to the existing techniques. To improve understanding of part-load gas turbine operation, a set of comprehensive maps has been developed. A Gas Turbine Operational Map allows the study and visualisation of the complex trade-offs arising from gas turbine load reduction strategies. The load change strategy determines the life consumption of critical gas turbine components, which led to the development of a Life Consumption Map that takes into account low cycle fatigue and creep mechanisms.
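    To illustrate the binary-coded genetic algorithm idea, and why it leaves an unavoidable residual, here is a toy Python sketch that evolves a bit-string encoding of a single variable to drive the residual of a stand-in matching equation toward zero. The fitness function and encoding range are assumptions for the example; the thesis applies this idea to CCPP part-load matching equations.

    ```python
    # Hedged sketch of a binary-coded GA minimizing the residual of a toy
    # matching equation f(x) = 0. Not the thesis's CCPP model.

    import random

    BITS, LO, HI = 16, 0.0, 2.0   # assumed bit-string length and variable range

    def decode(bits):
        return LO + int("".join(map(str, bits)), 2) / (2**BITS - 1) * (HI - LO)

    def residual(x):
        # Toy stand-in for a compressor/turbine matching equation.
        return abs(x**3 - x - 1.0)   # root near x = 1.3247

    def evolve(pop_size=40, generations=60, p_mut=0.02):
        pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda ind: residual(decode(ind)))
            elite = pop[: pop_size // 2]              # keep the fitter half
            children = []
            while len(children) < pop_size - len(elite):
                a, b = random.sample(elite, 2)
                cut = random.randrange(1, BITS)
                child = a[:cut] + b[cut:]                               # one-point crossover
                child = [g ^ (random.random() < p_mut) for g in child]  # bit-flip mutation
                children.append(child)
            pop = elite + children
        best = min(pop, key=lambda ind: residual(decode(ind)))
        return decode(best)

    print(evolve())   # approaches 1.3247, with a small residual left over
    ```

    Because the GA only searches a discretized space, some residual always remains; that limitation is what motivates the Direct Solution Method described above.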

    Optimization of Railway Transportation Hazmats and Regular Commodities

    Get PDF
    Transportation of dangerous goods has received growing attention in academic and scientific research over the last few decades, as countries throughout the world have become increasingly industrialized, making Hazmats an integral part of our lifestyle. However, the number of scholarly articles in this field is smaller than in other areas of supply chain management (SCM). Considering the low-probability, high-consequence (LPHC) nature of Hazmat transportation, on the one hand, and the immense volume of shipments, accounting for more than a hundred tons in North America and Europe, on the other, we can safely state that the number of scholarly articles and dissertations has not been proportional to the significance of the subject. On this ground, we conducted our research to contribute to the further development of the domain of Hazmat transportation and, in general terms, of sustainable supply chain management (SSCM). Transportation of Hazmats, from a logistical standpoint, may involve all modes of transport (air, marine, road and rail) as well as intermodal transportation systems. Although road shipment is predominant in most of the literature, railway transportation of Hazmats has proven to be a potentially significant means of transporting dangerous goods with respect to both economies of scale and transport risk. These factors have not only motivated more thorough investigation of intermodal transportation of Hazmats over road and rail networks, but have also encouraged competition between rail and road companies, each of which has inherent advantages over the other due to its infrastructural and technological background. Truck shipment has proven to provide more flexibility; trains, by contrast, provide more reliability in terms of transport risk when conveying Hazmats in bulk. In line with this motivation, the first chapter of the thesis provides an introduction to the shipment of hazardous commodities through rail networks. Relevant statistics on the volume of Hazmat goods, the number of accidents, the rate of incidents, and the rate of fatalities and injuries due to incidents involving Hazmats shed light on the significance of the topic under study. In chapter two, we review the most pertinent articles, with emphasis on state-of-the-art papers. In chapter three, looking at the problem from the carrier company's perspective, a mixed-integer quadratically constrained program (MIQCP) is developed that seeks to minimize transportation cost under a set of constraints, including those associated with Hazmats. Due to the complexity of the problem, the risk function is piecewise-linearized using a set of auxiliary variables, resulting in an MIP problem. Further, considering the interests of both carrier companies and regulatory agencies, which are the minimization of cost and risk respectively, a multiobjective MINLP model is developed and reduced to an MILP through piecewise linearization of the risk term in the objective function. For both the single-objective and multiobjective formulations, model variants with bifurcated and non-bifurcated flows are presented. In chapter four, we carry out experiments on two main cases: the first considers smaller instances of the problem, and the second focuses on a larger instance.
    Finally, chapter five concludes the dissertation with a summary of the overall discussion and some comments on avenues for future work.
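    As a sketch of the piecewise-linearization step, with a toy quadratic risk term rather than the thesis's model, the following Python code builds breakpoints for a nonlinear function and evaluates the linear interpolant. In the actual MILP, auxiliary lambda variables (an SOS2 set) select the active segment.

    ```python
    # Illustrative piecewise linearization of a nonlinear risk term r(x):
    # replace r(x) with lambda-weighted breakpoints so the model becomes a
    # mixed-integer *linear* program. The risk function below is a toy.

    def linearize(risk, lo, hi, segments):
        """Return breakpoints (x_k, risk(x_k)). The MILP would add
        lambda_k >= 0 with sum(lambda_k) = 1 (SOS2) and substitute
        x = sum(lambda_k * x_k), r = sum(lambda_k * risk(x_k))."""
        xs = [lo + (hi - lo) * k / segments for k in range(segments + 1)]
        return [(x, risk(x)) for x in xs]

    def evaluate(points, x):
        """Value of the piecewise-linear approximation at x."""
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            if x0 <= x <= x1:
                return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
        raise ValueError("x outside breakpoint range")

    risk = lambda flow: 0.04 * flow**2   # toy quadratic exposure-risk term
    points = linearize(risk, 0.0, 100.0, 5)
    print(evaluate(points, 37.0), risk(37.0))  # 56.8 vs exact 54.76
    ```

    More segments shrink the linearization error at the cost of extra variables, the usual trade-off when reducing an MINLP to an MILP.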

    Test Case Generation for Object-Oriented Imperative Languages in CLP

    Full text link
    Testing is a vital part of the software development process. Test Case Generation (TCG) is the process of automatically generating a collection of test cases which are applied to a system under test. White-box TCG is usually performed by means of symbolic execution, i.e., instead of executing the program on normal values (e.g., numbers), the program is executed on symbolic values representing arbitrary values. When dealing with an object-oriented (OO) imperative language, symbolic execution becomes challenging: among other things, it must be able to backtrack, complex heap-allocated data structures must be created during the TCG process, and features like inheritance, virtual invocations and exceptions have to be taken into account. Due to its inherent symbolic execution mechanism, we argue in this paper that Constraint Logic Programming (CLP) has a promising, unexploited application field in TCG. We support our claim by developing a fully CLP-based framework for TCG of an OO imperative language, and by assessing a corresponding implementation on a set of challenging Java programs. A unique characteristic of our approach is that it handles all language features using only CLP, without the need to develop specific constraint operators (e.g., to model the heap).
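    The following Python sketch is only a toy analogue of the CLP mechanism the paper relies on: each execution path of a trivial method is represented by its path constraints, and concrete inputs are found by finite-domain search, mimicking CLP's constraint solving and labeling. The method and domain are assumptions for the example.

    ```python
    # Hedged sketch of constraint-based TCG (the paper itself uses CLP):
    # collect one constraint list per execution path, then "label" the
    # symbolic inputs by searching a small finite domain.

    def paths_of_abs_diff():
        """Paths of: def abs_diff(x, y): return x - y if x > y else y - x.
        Each path is (list of constraints, description of the result)."""
        return [
            ([lambda x, y: x > y],  "returns x - y"),
            ([lambda x, y: x <= y], "returns y - x"),
        ]

    def label(constraints, domain=range(-3, 4)):
        """Find concrete inputs satisfying every path constraint,
        mimicking CLP labeling over a finite domain."""
        for x in domain:
            for y in domain:
                if all(c(x, y) for c in constraints):
                    return x, y
        return None   # path infeasible over this domain

    for constraints, desc in paths_of_abs_diff():
        print(desc, "->", label(constraints))
    # returns x - y -> (-2, -3)
    # returns y - x -> (-3, -3)
    ```

    In real CLP-based TCG, backtracking over paths and solving the constraints are handled by the same underlying engine, which is exactly the synergy the paper exploits.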

    Acta Cybernetica: Volume 21, Number 4.

    Get PDF