
    Spatial inhomogeneities in the sedimentation of biogenic particles in ocean flows: analysis in the Benguela region

    Sedimentation of particles in the ocean leads to inhomogeneous horizontal distributions at depth, even if the release process is homogeneous. We study this phenomenon by considering a horizontal sheet of sinking particles immersed in an oceanic flow, and determine how the particles are distributed when they settle on the seabed (or are collected at a given depth). The study is performed from a Lagrangian viewpoint, taking into account the properties of the oceanic flow and the physical characteristics (size and density) of typical biogenic sinking particles. Two main processes determine the distribution: the stretching of the sheet caused by the flow and its projection onto the surface where particles accumulate. These mechanisms, and their relative importance in producing inhomogeneities, are checked with numerical experiments in the Benguela region. Faster (heavier or larger) sinking particles distribute more homogeneously than slower ones. Comment: 24 pages, 8 figures. To appear in J. Geophys. Res.-Oceans
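
    To make the kinematics concrete, here is a minimal sketch of the sinking-particle model commonly used for small biogenic particles, assuming passive advection plus a constant Stokes settling velocity; the paper's exact formulation may differ.

        % Trajectory of a particle advected by the ocean velocity field u(x,t)
        % while sinking at a Stokes terminal velocity v_s (assumed formulation):
        \begin{align}
          \frac{d\mathbf{x}}{dt} &= \mathbf{u}(\mathbf{x},t) - v_s\,\hat{\mathbf{z}}, &
          v_s &= \frac{2}{9}\,\frac{(\rho_p-\rho_f)\,g\,a^2}{\mu},
        \end{align}
        % where a is the particle radius, rho_p and rho_f the particle and fluid
        % densities, g gravity, and mu the dynamic viscosity of seawater.

    Under this model a larger radius or density contrast gives a larger settling velocity, a shorter transit time through the flow, and hence less stretching of the sheet before it reaches the seabed, consistent with the conclusion that faster sinkers distribute more homogeneously.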

    A General Framework for Static Profiling of Parametric Resource Usage

    Traditional static resource analyses estimate the total resource usage of a program, without executing it. In this paper we present a novel resource analysis whose aim is instead the static profiling of accumulated cost, i.e., to discover, for selected parts of the program, an estimate or bound of the resource usage accumulated in each of those parts. Traditional resource analyses are parametric in the sense that the results can be functions of input data sizes. Our static profiling is also parametric, i.e., our accumulated cost estimates are also parameterized by input data sizes. Our proposal is based on the concept of cost centers and on a program transformation that allows the static inference of functions that return bounds on these accumulated costs depending on input data sizes, for each cost center of interest. Such information is much more useful to the software developer than traditional resource usage functions, as it allows identifying the parts of a program that should be optimized because of their greater impact on the total cost of program executions. We also report on our implementation of the proposed technique using the CiaoPP program analysis framework, and provide some experimental results. This paper is under consideration for acceptance in TPLP. Comment: Paper presented at the 32nd International Conference on Logic Programming (ICLP 2016), New York City, USA, 16-21 October 2016, 22 pages, LaTeX
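
    As an illustration of the idea only (a Python sketch with made-up names, not the CiaoPP cost-center transformation, which operates on logic programs), accumulated cost per cost center can be pictured as a counter attached to selected program parts, with statically inferred bounds that are functions of the input size n:

        # Illustrative only: hypothetical cost centers and bounds, not CiaoPP output.
        from collections import defaultdict

        cost = defaultdict(int)            # accumulated cost (e.g., calls) per cost center

        def rev(xs):                       # cost center "rev"
            cost["rev"] += 1
            return [] if not xs else rev(xs[1:]) + [xs[0]]

        def sort_and_reverse(xs):          # cost center "main"
            cost["main"] += 1
            return rev(sorted(xs))

        # Hypothetical statically inferred accumulated-cost bounds, parametric in n = len(xs):
        bounds = {"rev": lambda n: n + 1, "main": lambda n: 1}

        xs = list(range(10))
        sort_and_reverse(xs)
        for cc, bound in bounds.items():
            assert cost[cc] <= bound(len(xs)), f"bound exceeded for cost center {cc}"
            print(cc, cost[cc], "<=", bound(len(xs)))

    The point of the static analysis is that such bound functions are obtained without running the program; the run above only checks them on one concrete input.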

    Towards Energy Consumption Verification via Static Analysis

    In this paper we leverage an existing general framework for resource usage verification and specialize it for verifying energy consumption specifications of embedded programs. Such specifications can include both lower and upper bounds on energy usage, thus expressing intervals within which the energy usage is to be certified to lie. The bounds of the intervals can be given in general as functions of input data sizes. Our verification system can prove whether such energy usage specifications are met or not. It can also infer the particular conditions under which the specifications hold. To this end, these conditions are also expressed as intervals of functions of input data sizes, such that a given specification can be proved for some intervals but disproved for others. The specifications themselves can also include preconditions expressing intervals for input data sizes. We report on a prototype implementation of our approach within the CiaoPP system for the XC language and XS1-L architecture, and illustrate with an example how embedded software developers can use this tool, in particular to determine values for program parameters that ensure meeting a given energy budget while minimizing the loss in quality of service. Comment: Presented at HIP3ES, 2015 (arXiv: 1501.03064)
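
    A toy sketch of the interval-checking idea (all functions and numbers below are hypothetical, and the real system works on inferred energy functions rather than Python code): given an inferred energy bound and a specification with lower and upper bound functions of the data size n, the tool reports the ranges of n on which the specification is proved or disproved.

        # Illustrative only: made-up energy functions, not output of the CiaoPP/XC toolchain.
        def inferred_energy(n):        # hypothetical inferred upper bound on energy (nJ)
            return 4.0 * n * n + 50.0 * n + 2000.0

        def spec_lower(n):             # hypothetical specification bounds (nJ)
            return 1000.0

        def spec_upper(n):
            return 5.0 * n * n + 1000.0

        def split_precondition(sizes):
            """Partition the input-size precondition into proved / disproved parts."""
            proved = [n for n in sizes if spec_lower(n) <= inferred_energy(n) <= spec_upper(n)]
            disproved = [n for n in sizes if n not in proved]
            return proved, disproved

        proved, disproved = split_precondition(range(1, 200))
        print("specification holds for n in", (min(proved), max(proved)))
        print("specification fails for n in", (min(disproved), max(disproved)))

    With these particular numbers the specification is disproved for small n (the constant term dominates) and proved from roughly n = 66 upwards, mirroring the idea of answering with intervals of data sizes rather than a single yes/no.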

    An Approach to Static Performance Guarantees for Programs with Run-time Checks

    Instrumenting programs to perform run-time checking of properties, such as regular shapes, is a common and useful technique that helps programmers detect incorrect program behaviors. This is especially true in dynamic languages such as Prolog. However, such run-time checks inevitably introduce overhead (in execution time, memory, energy, etc.). Several approaches have been proposed for reducing this overhead, such as eliminating the checks that can statically be proved to always succeed, and/or optimizing the way in which the (remaining) checks are performed. However, there are cases in which it is not possible to remove all checks statically (e.g., open libraries which must check their interfaces, complex properties, unknown code, etc.) and in which, even after optimizations, the remaining checks may still introduce an unacceptable level of overhead. It is thus important for programmers to be able to determine the additional cost due to the run-time checks and compare it to some notion of admissible cost. The common practice for estimating run-time checking overhead is profiling, which is not exhaustive by nature. Instead, we propose a method that uses static analysis to estimate such overhead, with the advantage that the estimates are functions parameterized by input data sizes. Unlike profiling, this approach can provide guarantees for all possible execution traces, and it allows assessing how the overhead grows as the size of the input grows. Our method also extends an existing assertion verification framework to express "admissible" overheads, and statically and automatically checks whether the instrumented program conforms with such specifications. Finally, we present an experimental evaluation of our approach that suggests that our method is feasible and promising. Comment: 15 pages, 3 tables; submitted to ICLP'18, accepted as technical communication
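
    A minimal sketch of the idea (hypothetical cost functions, not the actual assertion framework): given statically inferred cost functions for the program with and without the run-time checks, the overhead itself becomes a function of the input size n and can be argued about for all n, not just the sizes a profiler happens to sample.

        # Illustrative only: made-up cost functions and an admissible-overhead spec.
        def cost_plain(n):             # hypothetical inferred cost without checks
            return 3 * n + 10

        def cost_checked(n):           # hypothetical inferred cost with run-time checks
            return 5 * n + 12

        def overhead_ratio(n):
            return cost_checked(n) / cost_plain(n)

        ADMISSIBLE = 2.0               # spec: checked cost at most twice the plain cost

        # (5n + 12) / (3n + 10) grows monotonically towards 5/3 < 2, so the spec holds
        # for every n >= 1; the assertion below is just a concrete sanity check.
        assert all(overhead_ratio(n) <= ADMISSIBLE for n in range(1, 10_000))
        print("admissible overhead holds; asymptotic ratio =", 5 / 3)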

    Material on the Spanish Strike Movement of Recent Years

    On 2 March 1974, Salvador Puig Antich in Barcelona and the Pole Heinz Chez in Tarragona were murdered by the Spanish executive. Before the executions, petitions for clemency from all over the world reached Franco; there were countless demonstrations and solidarity actions in many countries, especially in France, but also in Italy, both before and after the killings. In Germany, too, demonstrations took place in many cities. In view of these crimes of the military justice system, as well as of the trial of ten workers' leaders on 20 December 1973, one has to ask: How much further can the repression grow, and on what basis? Does the "liberalization" invoked by the bourgeois press really exist? What, in fact, are the conditions of the class struggle in Spain, how strong is the workers' movement, and which political groups are dominant within it?

    Towards Data-driven Software-defined Infrastructures

    The abundance of computing technologies and devices implies that we will live in a data-driven society in the coming years. But this data-driven society requires radically new technologies in the data center to deal with data manipulation, transformation, access control, sharing and placement, among others. We advocate in this paper for a new generation of Software Defined Data Management Infrastructures covering the entire life cycle of data. On the one hand, this will require new extensible programming abstractions and services for data management in the data center. On the other hand, it also implies opening up the control plane to data owners outside the data center so that they can manage the data life cycle. We present in this article the open challenges in data-driven software-defined infrastructures, together with a use case based on Software Defined Protection of data.
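
    As a purely illustrative sketch (all names below are hypothetical, not an API from the paper), the kind of life-cycle policy a data owner might hand to such a software-defined control plane could look like this:

        # Hypothetical data-lifecycle policy a data owner could submit to the control plane.
        from dataclasses import dataclass, field

        @dataclass
        class LifecyclePolicy:
            dataset: str
            placement: str                                # e.g. "eu-region-only"
            protection: str                               # e.g. "erasure-coding-8+3"
            readers: list = field(default_factory=list)   # principals allowed to read
            archive_after_days: int = 90
            delete_after_days: int = 365

        policy = LifecyclePolicy(
            dataset="sensor-logs-2024",
            placement="eu-region-only",
            protection="erasure-coding-8+3",
            readers=["analytics-team"],
        )
        print(policy)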

    Solving Recurrence Relations using Machine Learning, with Application to Cost Analysis

    Automatic static cost analysis infers information about the resources used by programs without actually running them with concrete data, and presents such information as functions of input data sizes. Most analysis tools for logic programs (and other languages) are based on setting up recurrence relations representing (bounds on) the computational cost of predicates, and solving them to find closed-form functions that are equivalent to (or a bound on) them. Such recurrence solving is a bottleneck in current tools: many of the recurrences that arise during the analysis cannot be solved with current solvers, such as Computer Algebra Systems (CASs), so that specific methods for different classes of recurrences need to be developed. We address this challenge by developing a novel, general approach for solving arbitrary, constrained recurrence relations, which uses machine-learning sparse-regression techniques to guess a candidate closed-form function, and a combination of an SMT solver and a CAS to check whether such a function is actually a solution of the recurrence. We have implemented a prototype and evaluated it with recurrences generated by a cost analysis system (the one in CiaoPP). The experimental results are quite promising, showing that our approach can find closed-form solutions, in reasonable time, for classes of recurrences that cannot be solved by such a system, nor by current CASs. Comment: In Proceedings ICLP 2023, arXiv:2308.1489
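
    A toy guess-and-check sketch in Python (made-up recurrence and basis; the paper uses sparse regression plus an SMT solver and a CAS, whereas here ordinary least squares and sympy stand in for both): sample the recurrence on small inputs, fit a linear combination of candidate terms, then verify symbolically that the rounded guess really satisfies the recurrence.

        # Illustrative only: not the paper's implementation.
        import numpy as np
        import sympy as sp

        def f(n):                          # recurrence: f(0) = 1, f(n) = f(n-1) + 2n
            return 1 if n == 0 else f(n - 1) + 2 * n

        ns = np.arange(1, 20)
        ys = np.array([f(k) for k in ns], dtype=float)

        # Guess step: fit coefficients of candidate terms 1, n, n^2, n*log(n).
        A = np.column_stack([np.ones_like(ns, dtype=float), ns.astype(float),
                             ns.astype(float) ** 2, ns * np.log(ns)])
        coeffs, *_ = np.linalg.lstsq(A, ys, rcond=None)
        coeffs = np.where(np.abs(coeffs) < 1e-6, 0.0, np.round(coeffs, 6))
        print("guessed coefficients:", coeffs)        # expected to be close to [1, 1, 1, 0]

        # Check step: verify symbolically that the guess satisfies the recurrence.
        n = sp.symbols('n', integer=True, nonnegative=True)
        guess = float(coeffs[0]) + float(coeffs[1]) * n + float(coeffs[2]) * n ** 2
        assert sp.simplify(guess.subs(n, 0) - 1) == 0                      # base case
        assert sp.simplify(guess - (guess.subs(n, n - 1) + 2 * n)) == 0    # recurrence
        print("verified closed form: f(n) =", sp.nsimplify(guess))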

    Singularities and undefinitions in the calibration functions of sonic anemometers

    A mathematical model of the process employed by a sonic anemometer to build up the measured wind vector in a steady flow is presented, to illustrate how the geometry of these sensors, together with the characteristics of the aerodynamic disturbance on the acoustic path, can lead to singularities in the transformation function that relates the measured (disturbed) wind vector to the real (corrected) wind vector, impeding the application of correction/calibration functions for some wind conditions. The implicit function theorem allows identification of those combinations of real wind conditions and design parameters that lead to undefined correction/calibration functions. In general, orthogonal-path sensors do not show problematic combinations of parameters. However, some sonic sensor geometries available on the market, with paths forming smaller angles, can lead to undefined correction functions for some levels of aerodynamic disturbance and for certain wind directions. The parameters studied have a strong influence on whether singularities exist in the correction/calibration function, and on how many appear for a given combination of parameters. Some conclusions concerning good design practices are included.
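
    The singularity condition can be stated compactly (a sketch in generic notation, not necessarily the paper's exact formulation): writing the measurement process as m = F(v; theta), where v is the real wind vector, m the measured one, and theta collects the geometric and aerodynamic-disturbance parameters, the implicit function theorem guarantees a locally well-defined correction function v = F^{-1}(m; theta) only where

        % Local invertibility condition for the measurement map m = F(v; theta):
        \det\!\left( \frac{\partial \mathbf{F}}{\partial \mathbf{v}} \right) \neq 0 .
        % Singularities of the correction/calibration function occur where this
        % Jacobian determinant vanishes for some wind direction and parameter set.

    In these terms, the observation above is that orthogonal-path geometries keep the Jacobian non-singular, while designs with smaller path angles can make it singular for certain disturbance levels and wind directions.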