
    Static analysis of concurrent and distributed systems: concurrent objects and Ethereum Bytecode

    Doctoral thesis, Universidad Complutense de Madrid, Facultad de Informática, defended on 23-01-2020. Nowadays, concurrency and distribution have become a fundamental part of the software development process. The Internet and the increasingly widespread use of multicore processors have influenced the type of applications being developed, which has led to the creation of several concurrency models. In particular, a concurrency model that is gaining popularity is the actor model, the basis for concurrent objects. In this model, the objects (actors) are the concurrent units. Each object has its own processor and a local state, and communication between objects is carried out by message passing. In response to receiving a message, an actor can update its local state, send messages or create new objects. Developing correct concurrent programs is known to be harder than writing sequential ones because of aspects inherent to concurrency, such as data races or deadlocks. To ensure the correct behavior of concurrent programs, static analyses and verification techniques have been developed for the diverse existing concurrency models...
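    The actor model summarized in this abstract lends itself to a small illustration. Below is a minimal, hypothetical Python sketch (the Actor, Counter and Printer classes are illustrative inventions, not code from the thesis): each actor owns a mailbox and its own thread of control, and in reaction to a message it may update its local state, send messages, or create new actors.

```python
# Minimal actor-model sketch (hypothetical names; real actor languages
# such as Erlang or ABS differ in many details).
import queue
import threading
import time

class Actor:
    """An actor owns a mailbox, a local state and its own thread of control."""

    def __init__(self):
        self.mailbox = queue.Queue()
        self.state = {}
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, message):
        # Communication happens only by asynchronous message passing.
        self.mailbox.put(message)

    def _run(self):
        while True:
            self.receive(self.mailbox.get())

    def receive(self, message):
        raise NotImplementedError

class Counter(Actor):
    """On a message, an actor may update state, send messages, or spawn actors."""

    def receive(self, message):
        kind, arg = message
        if kind == "inc":
            self.state["n"] = self.state.get("n", 0) + 1   # update local state
        elif kind == "report":
            arg.send(("count", self.state.get("n", 0)))    # send a message
        elif kind == "spawn":
            arg.send(("child", Counter()))                 # create a new actor

class Printer(Actor):
    def receive(self, message):
        print(message)

if __name__ == "__main__":
    printer = Printer()
    counter = Counter()
    counter.send(("inc", None))
    counter.send(("inc", None))
    counter.send(("report", printer))   # eventually prints ("count", 2)
    time.sleep(0.1)                     # let the daemon threads drain mailboxes
```

    Because all interaction is serialized through the mailbox, no two threads ever touch an actor's local state at once; this is precisely what rules out low-level data races in the model, though deadlocks and higher-level races remain possible and motivate the static analyses the thesis develops.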

    A methodology correlating code optimizations with data memory accesses, execution time and energy consumption

    The proliferation of data and of electronic devices has put software with low execution time and low energy consumption in the spotlight. The key to optimizing software is the correct choice, order and parameters of optimization transformations, which has remained an open problem in compilation research for decades, for several reasons. First, most of the transformations are interdependent, so addressing them separately is not effective. Second, it is very hard to couple the transformation parameters to the processor architecture (e.g., cache size) and to the algorithm characteristics (e.g., data reuse); therefore, compiler designers and researchers either do not take them into account at all or do so only partly. Third, the exploration space, i.e., the set of all optimization configurations that have to be explored, is huge, and thus searching it is impractical. In this paper, the above problems are addressed for data-dominant affine loop kernels, delivering significant contributions. A novel methodology is presented that reduces the exploration space of six code optimizations by many orders of magnitude. The objective can be execution time (ET), energy consumption (E), or the number of L1, L2 and main memory accesses. The exploration space is reduced in two phases: first, by applying a novel register blocking algorithm and a novel loop tiling algorithm, and second, by computing the maximum and minimum ET/E values for each optimization set. The proposed methodology has been evaluated on both embedded and general-purpose CPUs and on seven well-known algorithms, achieving high gains in memory accesses, speedup and energy consumption (from 1.17 up to 40) over the gcc compiler, hand-written optimized code and Polly. The exploration space from which the near-optimum parameters are selected is reduced by 17 up to 30 orders of magnitude.
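    As a rough illustration of the kind of transformation the methodology tunes, the sketch below applies loop tiling (plus a touch of scalar replacement) to matrix multiplication, a data-dominant affine kernel. This is an assumption-laden toy: the tile size T is an arbitrary placeholder, whereas the paper derives such parameters from the cache hierarchy.

```python
import numpy as np

def matmul_tiled(A, B, T=32):
    """Tiled multiply of square matrices: each T x T block of data is
    reused while it still fits in a cache level, cutting L1/L2/DRAM
    accesses relative to the untiled three-loop version."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for ii in range(0, n, T):            # tiles of rows of C
        for kk in range(0, n, T):        # tiles of the reduction dimension
            for jj in range(0, n, T):    # tiles of columns of C
                for i in range(ii, min(ii + T, n)):
                    for k in range(kk, min(kk + T, n)):
                        a = A[i, k]      # scalar replacement of the reused load
                        for j in range(jj, min(jj + T, n)):
                            C[i, j] += a * B[k, j]
    return C

if __name__ == "__main__":
    A = np.random.rand(64, 64)
    B = np.random.rand(64, 64)
    # Same result as the untiled product, different memory access order.
    assert np.allclose(matmul_tiled(A, B), A @ B)
```

    The tuning problem the paper attacks is visible even here: T interacts with unrolling, register pressure and array layout, so picking its value independently of the other transformations is exactly the separate-sub-problem trap the abstract describes.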

    A methodology pruning the search space of six compiler transformations by addressing them together as one problem and by exploiting the hardware architecture details

    Today’s compilers have a plethora of optimization transformations to choose from, and the correct choice, order and parameters of these transformations have a significant impact on performance. Choosing the correct order and parameters of optimizations has been a long-standing problem in compilation research that until now has remained unsolved: optimizing each sub-problem separately yields a different schedule/binary per sub-problem, and these schedules cannot coexist, as refining one degrades the others. Researchers try to solve this problem with iterative compilation techniques, but the search space is so big that it cannot be searched even with modern supercomputers. Moreover, compiler transformations do not take hardware architecture details and data reuse into account in an efficient way. In this paper, a new iterative compilation methodology is presented which reduces the search space of six compiler transformations by addressing the above problems; the search space is reduced by many orders of magnitude, and thus an efficient solution can now be found. The transformations are the following: loop tiling (including the number of levels of tiling), loop unrolling, register allocation, scalar replacement, loop interchange and data array layouts. The search space is reduced (a) by addressing the aforementioned transformations together as one problem and not separately, and (b) by taking into account the custom hardware architecture details (e.g., cache size and associativity) and algorithm characteristics (e.g., data reuse). The proposed methodology has been evaluated against iterative compilation and the gcc/icc compilers, on both embedded and general-purpose processors; it achieves significant performance gains at many orders of magnitude lower compilation time.
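    The core pruning idea, letting hardware details bound the space instead of enumerating it blindly, can be caricatured in a few lines. The sketch below is a hypothetical simplification, not the paper's algorithm: it keeps only tile-size triples whose combined working set fits in a cache of assumed size, discarding the rest before any search begins.

```python
def candidate_tiles(cache_bytes=32 * 1024, elem_bytes=8, sizes=range(8, 257, 8)):
    """Keep only (Ti, Tj, Tk) triples whose tiled-matmul working set
    (a Ti x Tk tile, a Tk x Tj tile and a Ti x Tj tile) fits in cache.
    Cache size, element width and candidate sizes are all assumptions
    made for illustration."""
    keep = []
    for ti in sizes:
        for tj in sizes:
            for tk in sizes:
                working_set = (ti * tk + tk * tj + ti * tj) * elem_bytes
                if working_set <= cache_bytes:
                    keep.append((ti, tj, tk))
    return keep

if __name__ == "__main__":
    total = len(range(8, 257, 8)) ** 3
    kept = len(candidate_tiles())
    print(f"kept {kept} of {total} tile triples")
```

    A real model would also account for associativity and for the interplay with unrolling and register allocation, shrinking the candidate list further; the point is only that coupling transformation parameters to the hardware turns an intractable enumeration into a small, searchable set.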