
    Evoplex: A platform for agent-based modeling on networks

    Agent-based modeling and network science have been used extensively to advance our understanding of emergent collective behavior in systems composed of a large number of simple interacting individuals or agents. With the increasing availability of high computational power in affordable personal computers, dedicated efforts to develop multi-threaded, scalable and easy-to-use software for agent-based simulations are needed more than ever. Evoplex meets this need by providing a fast, robust and extensible platform for developing agent-based models and multi-agent systems on networks. Each agent is represented as a node and interacts with its neighbors, as defined by the network structure. Evoplex is ideal for modeling complex systems, for example in evolutionary game theory and computational social science. In Evoplex, the models are not coupled to the execution parameters or the visualization tools, and a user-friendly graphical interface makes it easy for all users, from newcomers to experienced modelers, to create, analyze, replicate and reproduce experiments. Comment: 6 pages, 5 figures; accepted for publication in SoftwareX [software available at https://evoplex.org]
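
    The sketch below is an illustrative toy model of the kind Evoplex targets, not its actual C++ API: agents sit on the nodes of a ring network and each one imitates the strategy of its best-scoring neighbor in a prisoner's dilemma. All function and variable names are hypothetical.

        import random

        def ring_lattice(n, k):
            """Each of n nodes is linked to its k nearest neighbors on each side."""
            return {i: [(i + d) % n for d in range(-k, k + 1) if d != 0] for i in range(n)}

        def play(strategy_a, strategy_b, payoff):
            """Pairwise game payoff for agent a against agent b."""
            return payoff[(strategy_a, strategy_b)]

        def step(graph, strategies, payoff):
            """One synchronous update: each agent imitates its best-scoring neighbor."""
            scores = {}
            for node, neighbors in graph.items():
                scores[node] = sum(play(strategies[node], strategies[m], payoff) for m in neighbors)
            new = {}
            for node, neighbors in graph.items():
                best = max(neighbors + [node], key=lambda m: scores[m])
                new[node] = strategies[best]
            return new

        if __name__ == "__main__":
            random.seed(1)
            # Prisoner's dilemma payoffs for the row player: C = cooperate, D = defect.
            payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
            graph = ring_lattice(100, 2)
            strategies = {i: random.choice("CD") for i in graph}
            for _ in range(50):
                strategies = step(graph, strategies, payoff)
            print("cooperators:", sum(s == "C" for s in strategies.values()))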

    Intelligent business processes composition based on mas, semantic and cloud integration (IPCASCI)

    Component reuse is one of the techniques that most clearly contributes to the evolution of the software industry by providing efficient mechanisms to create quality software. Reuse increases both software reliability, because it relies on previously tested software components, and development productivity, and it leads to a clear reduction in cost. Web services have become a standard for application development in cloud computing environments and are essential in business process development. These services facilitate software construction that is relatively fast and efficient, two aspects which can be improved by defining suitable models of reuse. This research work is intended to define a model which captures the construction requirements of new services built by service composition. To this end, the composition is based on tested Web services and the artificial intelligence tools at our disposal. It is believed that a multi-agent architecture based on virtual organizations is a suitable tool to facilitate the construction of cloud computing environments for business processes from other existing environments, with the help of ontological models and tools supporting the BPEL (Business Process Execution Language) standard. In the context of this proposal, a new business process must be generated from the services available in the platform, starting from the requirement specifications that the process should meet. These specifications consist of a semi-free textual description of the requirements of the new service. The virtual organizations based on a multi-agent system manage the tasks requiring intelligent behaviour. This system analyses the input (the textual description of the proposal) in order to deconstruct it into computable functionalities, which are subsequently processed. The Web services (or business processes) stored for reuse were created from the perspective of SOA architectures and are associated with an ontological component, which allows the multi-agent system (based on virtual organizations) to identify the services needed to complete the reuse process. Once the services that will compose the solution business process have been identified, the proposed model builds the service composition by applying the BPEL standard. This standard allows Web services to be composed in a straightforward way and provides the advantage of a direct mapping from Business Process Model and Notation (BPMN) diagrams.
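
    As a loose illustration of the composition idea (selecting already tested services and chaining them into a new business process), the Python sketch below matches a textual requirement against capability annotations and orchestrates the selected services. It is not the IPCASCI platform and does not use BPEL; every service, registry entry and keyword here is an invented stand-in.

        # Hypothetical sketch of capability-based service composition; it only mimics
        # the orchestration idea behind a BPEL process, not the actual IPCASCI platform.

        def invoice_ocr(doc):       return {"fields": f"fields({doc})"}
        def tax_calculator(data):   return {"tax": f"tax({data['fields']})"}
        def payment_gateway(data):  return {"receipt": f"paid({data['tax']})"}

        # Service registry: each reusable (already tested) service is annotated with
        # the capability it provides, a crude stand-in for the ontological component.
        REGISTRY = {
            "extract": invoice_ocr,
            "compute-tax": tax_calculator,
            "pay": payment_gateway,
        }

        def plan(requirement_text):
            """Map a semi-free textual requirement to an ordered list of capabilities."""
            keywords = {"read": "extract", "tax": "compute-tax", "payment": "pay"}
            return [cap for word, cap in keywords.items() if word in requirement_text.lower()]

        def compose(requirement_text, document):
            """Chain the selected services, feeding each output into the next service."""
            data = document
            for capability in plan(requirement_text):
                data = REGISTRY[capability](data)
            return data

        if __name__ == "__main__":
            print(compose("Read the invoice, apply tax and issue the payment", "invoice.pdf"))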

    HSP-Wrap: The Design and Evaluation of Reusable Parallelism for a Subclass of Data-Intensive Applications

    There is an increasing gap between the rate at which data is generated by scientific and non-scientific fields and the rate at which that data can be processed by available computing resources. In this paper, we introduce the fields of Bioinformatics and Cheminformatics, two fields where big data has become a problem due to continuing advances in the technologies that drive them, such as gene sequencing and small-ligand exploration. We introduce high performance computing as a means to process this growing base of data in order to facilitate knowledge discovery. We enumerate the goals of the project, including reusability, efficiency, reliability, and scalability. We then describe the implementation of a software scheduler which aims to improve the input and output performance of a targeted collection of informatics tools, as well as the profiling and optimization needed to tune the software. We evaluate the performance of the software with a scalability study of the Bioinformatics tools BLAST, HMMER, and MUSCLE, as well as the Cheminformatics tool DOCK6.
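
    The scheduling idea described above can be sketched as a wrapper that reads many small work items, groups them into batches and dispatches the batches to a pool of worker processes, so that per-item input/output overhead is amortized. This is only a hedged, minimal sketch of that pattern, not HSP-Wrap itself; the batch size, worker count and run_tool placeholder are assumptions.

        # Illustrative sketch (not HSP-Wrap itself): read many small work items once,
        # batch them, and hand the batches to a pool of worker processes so that
        # per-item I/O overhead is amortized across the batch.
        from multiprocessing import Pool

        def run_tool(batch):
            # Placeholder for invoking an informatics tool (e.g. a sequence search)
            # on a batch of inputs; here we just "process" each item.
            return [item.upper() for item in batch]

        def batches(items, size):
            for i in range(0, len(items), size):
                yield items[i:i + size]

        def schedule(items, batch_size=64, workers=4):
            with Pool(processes=workers) as pool:
                results = pool.map(run_tool, batches(items, batch_size))
            return [r for batch in results for r in batch]

        if __name__ == "__main__":
            sequences = [f"acgt{i}" for i in range(1000)]   # stand-in for query sequences
            print(len(schedule(sequences)))                  # 1000 processed items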

    System-on-chip Computing and Interconnection Architectures for Telecommunications and Signal Processing

    This dissertation proposes novel architectures and design techniques targeting SoC building blocks for telecommunications and signal processing applications. Hardware implementation of Low-Density Parity-Check decoders is approached at both the algorithmic and the architectural level. Low-Density Parity-Check codes are a promising coding scheme for future communication standards due to their outstanding error correction performance. This work proposes a methodology for analyzing the effects of finite precision arithmetic on error correction performance and hardware complexity; the methodology is employed throughout the co-design of the decoder. First, a low-complexity check node based on the P-output decoding principle is designed and characterized on a CMOS standard-cell library. Results demonstrate an implementation loss below 0.2 dB down to a BER of 10^{-8} and a saving in complexity of up to 59% with respect to other works in the recent literature. High-throughput and low-latency issues are addressed with modified single-phase decoding schedules. A new "memory-aware" schedule is proposed that requires as little as 20% of the memory used by the traditional two-phase flooding schedule; additionally, throughput is doubled and logic complexity is reduced by 12%. These advantages are traded off against error correction performance, which makes the solution attractive only for long codes, such as those adopted in the DVB-S2 standard. The "layered decoding" principle is extended to codes not specifically conceived for this technique. The proposed architectures exhibit complexity savings on the order of 40% for both area and power consumption, while the implementation loss is smaller than 0.05 dB.
    Most modern communication standards employ Orthogonal Frequency Division Multiplexing as part of their physical layer. The core of OFDM is the Fast Fourier Transform and its inverse, which are in charge of symbol (de)modulation. Requirements on throughput and energy efficiency call for a hardware FFT implementation, while the ubiquity of the FFT suggests the design of parametric, re-configurable and re-usable IP hardware macrocells. In this context, this thesis describes an FFT/IFFT core compiler particularly suited for the implementation of OFDM communication systems. The tool employs an accuracy-driven configuration engine which automatically profiles the internal arithmetic and generates a core with minimum operand bit-widths and thus minimum circuit complexity. The engine performs a closed-loop optimization over three different internal arithmetic models (fixed-point, block floating-point and convergent block floating-point), using the numerical accuracy budget given by the user as a reference point. The flexibility and re-usability of the proposed macrocell are illustrated through several case studies which encompass all current state-of-the-art OFDM communication standards (WLAN, WMAN, xDSL, DVB-T/H, DAB and UWB). Implementation results are presented for two deep sub-micron standard-cell libraries (65 and 90 nm) and commercially available FPGA devices. Compared with other FFT core compilers, the proposed environment produces macrocells with lower circuit complexity and the same system-level performance (throughput, transform size and numerical accuracy).
    The final part of this dissertation focuses on the Network-on-Chip design paradigm, whose goal is to build scalable communication infrastructures connecting hundreds of cores. A low-complexity link architecture for mesochronous on-chip communication is discussed. The link allows looser skew constraints in clock tree synthesis, frequency speed-up, power consumption reduction and faster back-end turnarounds. The proposed architecture reaches a maximum clock frequency of 1 GHz on a 65 nm low-leakage CMOS standard-cell library. In a complex test case with a full-blown NoC infrastructure, the link overhead is only 3% of the chip area and 0.5% of the leakage power consumption. Finally, a new methodology, named metacoding, is proposed. Metacoding generates correct-by-construction, technology-independent RTL codebases for NoC building blocks. The RTL coding phase is abstracted and modeled with an object-oriented framework, integrated within a commercial tool for IP packaging (the Synopsys CoreTools suite). Compared with traditional coding styles based on pre-processor directives, metacoding produces 65% smaller codebases and reduces the number of configurations to verify by up to three orders of magnitude.
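
    The accuracy-driven configuration engine described for the FFT/IFFT compiler can be illustrated, under strong simplifying assumptions, as a search for the smallest operand bit-width whose quantized FFT still meets a user-supplied accuracy budget. The sketch below models only a fixed-point radix-2 FFT in Python/NumPy and is not the thesis tool; the SNR metric, bit-width range and test signals are assumptions.

        # Hedged sketch of an accuracy-driven bit-width search for a fixed-point FFT,
        # in the spirit of the compiler described above (not the actual tool): find the
        # smallest operand bit-width whose output SNR still meets the user's budget.
        import numpy as np

        def quantize(x, bits):
            """Round real and imaginary parts to a signed fixed-point grid in [-1, 1)."""
            scale = 2 ** (bits - 1)
            q = lambda v: np.clip(np.round(v * scale), -scale, scale - 1) / scale
            return q(x.real) + 1j * q(x.imag)

        def fixed_point_fft(x, bits):
            """Radix-2 DIT FFT with quantization after each stage's butterflies."""
            n = len(x)
            if n == 1:
                return x
            even = fixed_point_fft(x[0::2], bits)
            odd = fixed_point_fft(x[1::2], bits)
            twiddles = np.exp(-2j * np.pi * np.arange(n // 2) / n)
            half = quantize(twiddles * odd, bits)
            # Divide by 2 at each stage to keep values inside the fixed-point range.
            return quantize(np.concatenate([even + half, even - half]) / 2, bits)

        def output_snr_db(x, bits):
            ref = np.fft.fft(x) / len(x)      # scaled to match the /2-per-stage FFT
            out = fixed_point_fft(x, bits)
            noise = np.mean(np.abs(out - ref) ** 2)
            return 10 * np.log10(np.mean(np.abs(ref) ** 2) / noise)

        def minimum_bitwidth(snr_budget_db, n=256, trials=20):
            """Closed-loop search: smallest bit-width meeting the accuracy budget."""
            rng = np.random.default_rng(0)
            for bits in range(6, 25):          # candidate operand bit-widths
                snrs = []
                for _ in range(trials):
                    x = rng.uniform(-0.5, 0.5, n) + 1j * rng.uniform(-0.5, 0.5, n)
                    snrs.append(output_snr_db(x, bits))
                if min(snrs) >= snr_budget_db:
                    return bits
            return None

        if __name__ == "__main__":
            print("minimum bit-width for a 60 dB budget:", minimum_bitwidth(60.0))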

    Mega-modeling of complex, distributed, heterogeneous CPS systems

    Model-Driven Design (MDD) has proven to be a powerful technology to address the development of increasingly complex embedded systems. Beyond complexity itself, challenges come from the need to deal with parallelism and heterogeneity. System design must target different execution platforms with different OSs and HW resources, even bare-metal ones, support local and distributed systems, and integrate on top of these heterogeneous platforms multiple functional components coming from different sources (developed from scratch, legacy code and third-party code), with different behaviors operating under different models of computation and communication. Additionally, system optimization to improve performance, power consumption, cost, etc. requires analyzing huge lists of possible design solutions. Addressing these challenges requires flexible design technologies able to support, from a single-source model, its architectural mapping to different computing resources of different kinds and on different platforms. Traditional MDD methods and tools typically rely on fixed elements, which makes their integration under this variability difficult. For example, legacy code is unlikely to integrate directly with a third-party component in the same system; usually some re-coding is required to enable such an interconnection. This paper proposes a UML/MARTE system modeling methodology able to address the challenges mentioned above by improving flexibility and scalability. The approach is illustrated and demonstrated on a flight management system. The model is flexible enough to be adapted to different architectural solutions with minimal effort by changing its underlying Model of Computation and Communication (MoCC). Because the model is completely platform independent, various solutions on different execution platforms can be explored from the same model. This work has been partially funded by the EU and the Spanish MICINN through the ECSEL MegaMart and Comp4Drones projects and the TEC2017-86722-C4-3-R PLATINO project.
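
    A toy way to picture the single-source idea (not the UML/MARTE methodology itself) is a functional component written independently of its Model of Computation and Communication, which can then be bound either to a direct synchronous composition or to a thread-plus-queue deployment without touching its code. All names in the sketch below are hypothetical.

        # Toy sketch (not the UML/MARTE methodology): the functional component is kept
        # independent of the Model of Computation and Communication, so the same code
        # can be bound either to a direct synchronous call or to a threaded queue.
        import queue
        import threading

        def flight_plan_filter(waypoint):
            """Platform-independent functional behavior (hypothetical example)."""
            return waypoint if waypoint["altitude_ft"] >= 1000 else None

        # MoCC binding 1: synchronous, single-threaded composition.
        def run_synchronous(waypoints):
            return [w for w in map(flight_plan_filter, waypoints) if w is not None]

        # MoCC binding 2: the same component deployed behind a thread and a queue.
        def run_threaded(waypoints):
            inbox, results = queue.Queue(), []

            def worker():
                while True:
                    w = inbox.get()
                    if w is None:                      # sentinel ends the thread
                        break
                    out = flight_plan_filter(w)
                    if out is not None:
                        results.append(out)

            t = threading.Thread(target=worker)
            t.start()
            for w in waypoints:
                inbox.put(w)
            inbox.put(None)
            t.join()
            return results

        if __name__ == "__main__":
            data = [{"altitude_ft": 500}, {"altitude_ft": 3500}]
            assert run_synchronous(data) == run_threaded(data)
            print(run_synchronous(data))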