
    An efficient algorithm for minimizing time granularity periodical representations

    This paper addresses the technical problem of efficiently reducing the periodic representation of a time granularity to its minimal form. The minimization algorithm presented in the paper has an immediate practical application: it allows users to intuitively define granularities (and more generally, recurring events) with algebraic expressions that are then internally translated to mathematical characterizations in terms of minimal periodic sets. Minimality plays a crucial role, since the value of the recurring period has been shown to dominate the complexity when processing periodic sets.
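
    As a rough illustration of the underlying notion only (not the paper's algorithm), a time granularity can be modelled as the set of instants whose residues modulo a period P fall in a finite set R; minimizing the representation then amounts to finding the smallest divisor of P under which R is shift-invariant. The function name below is invented for this sketch.

        # Minimal sketch: a recurring set is represented by the residues R that
        # occur within one period P, i.e. the set { t : t mod P in R }.
        def minimize_period(P, residues):
            """Return (p, residues mod p) for the smallest period p dividing P."""
            R = set(residues)
            for p in sorted(d for d in range(1, P + 1) if P % d == 0):
                # R has period p iff shifting every residue by p stays inside R.
                if all((r + p) % P in R for r in R):
                    return p, sorted({r % p for r in R})
            return P, sorted(R)

        # Example: "every 3rd day" written with period 12 collapses to period 3.
        print(minimize_period(12, [0, 3, 6, 9]))   # (3, [0])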

    Supporting Temporal Reasoning by Mapping Calendar Expressions to Minimal Periodic Sets

    In recent years, several research efforts have focused on the concept of time granularity and its applications. A first stream of research investigated the mathematical models behind the notion of granularity and the algorithms to manage temporal data based on those models. A second stream of research investigated symbolic formalisms providing a set of algebraic operators to define granularities in a compact and compositional way. However, only very limited manipulation algorithms have been proposed that operate directly on the algebraic representation, making the symbolic formalisms unsuitable for applications that need to manipulate granularities. This paper aims to fill the gap between the results of these two streams of research by providing an efficient conversion from the algebraic representation to the equivalent low-level representation based on the mathematical models. In addition, the conversion returns a minimal representation in terms of period length. Our results have a major practical impact: users can more easily define arbitrary granularities in terms of algebraic operators and then access granularity reasoning and other services operating efficiently on the equivalent, minimal low-level representation. As an example, we illustrate the application to temporal constraint reasoning with multiple granularities. From a technical point of view, we propose a hybrid algorithm that interleaves the conversion of calendar subexpressions into periodic sets with the minimization of the period length. The algorithm returns set-based granularity representations having minimal period length, which is the most relevant parameter for the performance of the considered reasoning services. Extensive experimental work supports the techniques used in the algorithm and shows its efficiency and effectiveness.
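
    The hybrid strategy can be pictured as converting each calendar subexpression bottom-up into a low-level (period, residues) pair and re-minimizing the period after every step, so that intermediate periods stay as small as possible. The toy Python sketch below uses an invented union operator and is not the paper's calendar algebra.

        from math import gcd

        def minimize(P, residues):
            """Reduce a (period, residues) pair to its minimal period."""
            R = set(residues)
            for p in sorted(d for d in range(1, P + 1) if P % d == 0):
                if all((r + p) % P in R for r in R):
                    return p, sorted({r % p for r in R})
            return P, sorted(R)

        def union(g1, g2):
            """Union of two periodic sets, aligned on the lcm of their periods."""
            (p1, r1), (p2, r2) = g1, g2
            p = p1 * p2 // gcd(p1, p2)
            residues = {r + i * p1 for r in r1 for i in range(p // p1)} | \
                       {r + i * p2 for r in r2 for i in range(p // p2)}
            return minimize(p, residues)   # interleave conversion and minimization

        # "Every 2nd day" union "every 4th day" collapses back to period 2.
        print(union((2, [0]), (4, [0, 2])))        # (2, [0])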

    Cost-Based Optimization of Integration Flows

    Integration flows are increasingly used to specify and execute data-intensive integration tasks between heterogeneous systems and applications. There are many different application areas, such as real-time ETL and data synchronization between operational systems. Due to the growing amount of data, highly distributed IT infrastructures, and high requirements for data consistency and the up-to-dateness of query results, many instances of integration flows are executed over time. Because of this high load, and because synchronous source systems block while waiting for results, the performance of the central integration platform is crucial for the whole IT infrastructure. To meet these high performance requirements, we introduce the concept of cost-based optimization of imperative integration flows that relies on incremental statistics maintenance and inter-instance plan re-optimization. As a foundation, we introduce the concept of periodical re-optimization, including novel cost-based optimization techniques that are tailor-made for integration flows. Furthermore, we refine periodical re-optimization into on-demand re-optimization in order to avoid unnecessary re-optimization steps as well as the adaptation delays during which optimization opportunities are missed. This approach ensures low optimization overhead and fast workload adaptation.
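
    The control loop can be sketched roughly as follows; the class and threshold names are invented, incremental statistics are reduced to a moving average, and reordering operators by estimated cardinality stands in for the paper's cost-based techniques. Re-optimization is triggered either when a period elapses or, on demand, when the monitored statistics drift past a threshold.

        import time

        class FlowOptimizer:
            def __init__(self, plan, drift_threshold=0.5, period_s=60.0):
                self.plan = plan                 # current operator ordering
                self.stats = {}                  # incrementally maintained statistics
                self.baseline = {}               # snapshot taken at the last re-optimization
                self.drift_threshold = drift_threshold
                self.period_s = period_s
                self.last_opt = time.monotonic()

            def record(self, operator, cardinality):
                """Incremental statistics maintenance (exponential moving average)."""
                old = self.stats.get(operator, cardinality)
                self.stats[operator] = 0.8 * old + 0.2 * cardinality

            def drift(self):
                """Relative change of the statistics since the last re-optimization."""
                if not self.baseline or not self.stats:
                    return float("inf")
                return max(abs(v - self.baseline.get(op, 0)) / max(self.baseline.get(op, 1), 1)
                           for op, v in self.stats.items())

            def maybe_reoptimize(self):
                periodical = (time.monotonic() - self.last_opt) >= self.period_s
                on_demand = self.drift() > self.drift_threshold
                if periodical or on_demand:
                    # Stand-in for cost-based plan rewriting: cheapest operators first.
                    self.plan = sorted(self.plan, key=lambda op: self.stats.get(op, 0))
                    self.baseline = dict(self.stats)
                    self.last_opt = time.monotonic()

        opt = FlowOptimizer(plan=["join_customers", "filter_orders", "lookup_rates"])
        opt.record("filter_orders", 120)
        opt.maybe_reoptimize()                   # drift is infinite at first, so it fires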

    Programming agent-based demographic models with cross-state and message-exchange dependencies: A study with speculative PDES and automatic load-sharing

    Agent-based modeling and simulation is a versatile and promising methodology to capture complex interactions among entities and their surrounding environment. A great advantage is its ability to model phenomena at a macro scale by exploiting simpler descriptions at a micro level. It has proven effective in many fields and is rapidly becoming a de facto standard in the study of population dynamics. In this article we study programmability and performance aspects of the latest-generation ROOT-Sim speculative PDES environment for multi/many-core shared-memory architectures. ROOT-Sim transparently offers a programming model in which interactions can be based both on explicit message passing and on in-place state accesses. We introduce programming guidelines for the systematic exploitation of these facilities in agent-based simulations, and we study the effects on performance of an innovative load-sharing policy targeting these types of dependencies. An experimental assessment with synthetic and real-world applications is provided to validate our proposal.
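
    The two kinds of dependency can be shown schematically. The sketch below is plain Python with a toy sequential event loop, not ROOT-Sim's actual C API: a simulation object updates its own state, reads another object's state in place (cross-state dependency), and schedules an explicit event for it (message-exchange dependency).

        import heapq
        from itertools import count

        class Region:
            def __init__(self, name, population):
                self.name, self.population = name, population

        def run(regions, horizon):
            tie = count()                        # tie-breaker for events at equal times
            queue = [(0, next(tie), name, {"type": "STEP"}) for name in regions]
            heapq.heapify(queue)
            while queue:
                now, _, name, event = heapq.heappop(queue)
                if now >= horizon:
                    break
                region = regions[name]
                if event["type"] == "MIGRATE_IN":
                    region.population += 1       # local state update
                else:                            # "STEP"
                    # Cross-state dependency: read another object's state in place.
                    other = min(regions.values(), key=lambda r: r.population)
                    if other is not region and other.population < region.population:
                        # Message-exchange dependency: explicit event for the other object.
                        heapq.heappush(queue, (now + 1, next(tie), other.name,
                                               {"type": "MIGRATE_IN"}))
                        region.population -= 1
                    heapq.heappush(queue, (now + 1, next(tie), name, {"type": "STEP"}))

        regions = {n: Region(n, p) for n, p in [("A", 10), ("B", 2), ("C", 5)]}
        run(regions, horizon=20)
        print({name: r.population for name, r in regions.items()})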

    Gated Attention Coding for Training High-performance and Efficient Spiking Neural Networks

    Spiking neural networks (SNNs) are emerging as an energy-efficient alternative to traditional artificial neural networks (ANNs) due to their unique spike-based, event-driven nature. Coding is crucial in SNNs, as it converts external input stimuli into spatio-temporal feature sequences. However, most existing deep SNNs rely on direct coding, which produces weak spike representations and lacks the temporal dynamics inherent in human vision. Hence, we introduce Gated Attention Coding (GAC), a plug-and-play module that leverages a multi-dimensional gated attention unit to efficiently encode inputs into powerful representations before feeding them into the SNN architecture. GAC functions as a preprocessing layer that does not disrupt the spike-driven nature of the SNN, making it amenable to efficient neuromorphic hardware implementation with minimal modifications. Through a theoretical analysis based on an observer model, we demonstrate that GAC's attention mechanism improves temporal dynamics and coding efficiency. Experiments on the CIFAR-10/100 and ImageNet datasets demonstrate that GAC achieves state-of-the-art accuracy with remarkable efficiency. Notably, we improve top-1 accuracy by 3.10% on CIFAR-100 with only 6 time steps and by 1.07% on ImageNet, while reducing energy usage to 66.9% of previous work. To the best of our knowledge, this is the first exploration of an attention-based dynamic coding scheme in deep SNNs, showing exceptional effectiveness and efficiency on large-scale datasets.
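
    Conceptually, the coding layer repeats the static input over T time steps and modulates it with learned channel and temporal gates before the spiking backbone sees it. The PyTorch sketch below only illustrates that idea; the layer sizes and the exact attention structure are assumptions, not the authors' architecture.

        import torch
        import torch.nn as nn

        class GatedAttentionCoding(nn.Module):
            def __init__(self, channels, timesteps):
                super().__init__()
                self.timesteps = timesteps
                # Channel gate computed from globally pooled features.
                self.channel_gate = nn.Sequential(
                    nn.AdaptiveAvgPool2d(1),
                    nn.Conv2d(channels, channels, kernel_size=1),
                    nn.Sigmoid(),
                )
                # Temporal gate: one learnable scalar per time step.
                self.temporal_gate = nn.Parameter(torch.zeros(timesteps))

            def forward(self, x):                              # x: [B, C, H, W]
                rep = x.unsqueeze(0).repeat(self.timesteps, 1, 1, 1, 1)  # [T, B, C, H, W]
                channel = self.channel_gate(x)                 # [B, C, 1, 1]
                temporal = torch.sigmoid(self.temporal_gate).view(-1, 1, 1, 1, 1)
                return rep * channel * temporal                # gated spatio-temporal code

        coder = GatedAttentionCoding(channels=3, timesteps=6)
        out = coder(torch.randn(4, 3, 32, 32))                 # shape [6, 4, 3, 32, 32]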

    Sorting Omega Networks Simulated with P Systems: Optimal Data Layouts

    The paper introduces some sorting networks and their simulation with P systems in which each processor/membrane can hold more than one piece of data and perform operations on it internally. Several data layouts are discussed in this context, and an optimal one is proposed, together with its implementation as a P system with dynamic communication graphs.
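
    The idea that each processor/membrane holds a whole block of data can be illustrated with an odd-even transposition network over sorted blocks, where every compare-exchange becomes a merge-split of two blocks. This plain-Python sketch is only an analogy; the paper's construction is an omega network encoded as a P system with dynamic communication graphs.

        def merge_split(lo_block, hi_block):
            """Block compare-exchange: merge, then split into low and high halves."""
            merged = sorted(lo_block + hi_block)
            return merged[:len(lo_block)], merged[len(lo_block):]

        def block_sort(blocks):
            """Odd-even transposition sort where every node holds a sorted block."""
            blocks = [sorted(b) for b in blocks]
            for step in range(len(blocks)):
                for i in range(step % 2, len(blocks) - 1, 2):
                    blocks[i], blocks[i + 1] = merge_split(blocks[i], blocks[i + 1])
            return blocks

        print(block_sort([[9, 2], [7, 1], [5, 3], [8, 0]]))
        # [[0, 1], [2, 3], [5, 7], [8, 9]]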

    Multi-flow Optimization via Horizontal Message Queue Partitioning

    Integration flows are increasingly used to specify and execute data-intensive integration tasks between heterogeneous systems and applications. There are many different application areas, such as near real-time ETL and data synchronization between operational systems. Due to the growing amount of data, highly distributed IT infrastructures, and high requirements for the up-to-dateness of analytical query results and data consistency, many instances of integration flows are executed over time. Due to this high load, the performance of the central integration platform is crucial for an IT infrastructure. With the aim of maximizing throughput, we propose the concept of multi-flow optimization (MFO). In this approach, messages are collected during a waiting time and executed in batches to optimize sequences of plan instances of a single integration flow. We introduce a horizontal (value-based) partitioning approach for message batch creation and show how to compute the optimal waiting time. This approach significantly reduces the total execution time of a message sequence and hence maximizes throughput, while accepting a moderate increase in latency.
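
    The batching mechanism can be sketched as a queue that collects messages for a waiting time and partitions them horizontally by an attribute value, so that each released batch is processed by a single plan instance. The class and message fields below are invented for illustration, and the paper's cost model for computing the optimal waiting time is not reproduced here.

        import time
        from collections import defaultdict

        class BatchingQueue:
            """Collect messages for a waiting time, partitioned by an attribute value."""
            def __init__(self, waiting_time_s, partition_key):
                self.waiting_time_s = waiting_time_s
                self.partition_key = partition_key
                self.partitions = defaultdict(list)
                self.window_start = time.monotonic()

            def enqueue(self, message):
                # Horizontal (value-based) partitioning of the message queue.
                self.partitions[message[self.partition_key]].append(message)

            def due_batches(self):
                """After the waiting time has elapsed, release one batch per partition value."""
                if time.monotonic() - self.window_start < self.waiting_time_s:
                    return []
                batches = list(self.partitions.items())
                self.partitions = defaultdict(list)
                self.window_start = time.monotonic()
                return batches

        q = BatchingQueue(waiting_time_s=0.0, partition_key="customer")
        q.enqueue({"customer": "C1", "amount": 10})
        q.enqueue({"customer": "C2", "amount": 7})
        q.enqueue({"customer": "C1", "amount": 3})
        for key, batch in q.due_batches():
            print(key, len(batch))               # each batch feeds one plan instance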