
    Quantifying the benefits of SPECint distant parallelism in simultaneous multithreading architectures

    We exploit the existence of distant parallelism that future compilers could detect, and characterise its performance under simultaneous multithreading architectures. By distant parallelism we mean parallelism that cannot be captured by the processor instruction window and that can produce threads suitable for parallel execution in a multithreaded processor. We show that distant parallelism can make wider-issue processors feasible by providing more instructions from the distant threads, thus better exploiting the processor's resources when speeding up single-threaded integer applications. We also investigate the necessity of out-of-order processors in the presence of multiple threads of the same program. It is important to notice at this point that the benefits described are totally orthogonal to any other architectural techniques targeting a single thread. Peer reviewed. Postprint (published version).

    COEL: A Web-based Chemistry Simulation Framework

    The chemical reaction network (CRN) is a widely used formalism to describe the macroscopic behavior of chemical systems. Available tools for CRN modelling and simulation require local access and installation, and often involve local file storage, which is susceptible to loss, lacks searchable structure, and does not support concurrency. Furthermore, simulations are often single-threaded, and user interfaces are non-trivial to use. There are therefore significant hurdles to conducting efficient and collaborative chemical research. In this paper, we introduce a new enterprise chemistry simulation framework, COEL, which addresses these issues. COEL is the first web-based framework of its kind. A visually pleasing and intuitive user interface, simulations that run on a large computational grid, reliable database storage, and transactional services make COEL ideal for collaborative research and education. COEL's most prominent features include ODE-based simulations of chemical reaction networks and multicompartment reaction networks, with rich options for user interaction with those networks. COEL provides DNA-strand displacement transformations and visualization (and is to our knowledge the first CRN framework to do so), GA optimization of rate constants, expression validation, an application-wide plotting engine, and SBML/Octave/Matlab export. We also present an overview of the underlying software and technologies employed and describe the main architectural decisions driving our development. COEL is available at http://coel-sim.org for selected research teams only. We plan to provide a part of COEL's functionality to the general public in the near future. Comment: 23 pages, 12 figures, 1 table.
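    The ODE-based CRN simulations mentioned above can be sketched in a few lines. The following is a minimal illustration of mass-action kinetics for a single reaction, integrated with forward Euler; the reaction and rate constant are invented for the example and are not taken from COEL itself.

    ```python
    # Mass-action ODE simulation of the reaction A + B -> C.
    # d[A]/dt = d[B]/dt = -k[A][B],  d[C]/dt = +k[A][B]
    def simulate_crn(concentrations, rate_k, t_end, dt=0.001):
        """Integrate A + B -> C with forward Euler; returns final (a, b, c)."""
        a, b, c = concentrations
        for _ in range(int(t_end / dt)):
            flux = rate_k * a * b        # mass-action rate for A + B -> C
            a -= flux * dt
            b -= flux * dt
            c += flux * dt
        return a, b, c

    a, b, c = simulate_crn((1.0, 1.0, 0.0), rate_k=2.0, t_end=5.0)
    # Mass is conserved: a + c stays at 1.0 up to floating-point error.
    ```

    A production framework like COEL would use an adaptive stiff solver rather than fixed-step Euler, but the state-update structure is the same.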

    HeTM: Transactional Memory for Heterogeneous Systems

    Modern heterogeneous computing architectures, which couple multi-core CPUs with discrete many-core GPUs (or other specialized hardware accelerators), enable unprecedented peak performance and energy efficiency levels. Unfortunately, though, developing applications that can take full advantage of the potential of heterogeneous systems is a notoriously hard task. This work takes a step towards reducing the complexity of programming heterogeneous systems by introducing the abstraction of Heterogeneous Transactional Memory (HeTM). HeTM provides programmers with the illusion of a single memory region, shared among the CPUs and the (discrete) GPU(s) of a heterogeneous system, with support for atomic transactions. Besides introducing the abstract semantics and programming model of HeTM, we present the design and evaluation of a concrete implementation of the proposed abstraction, which we named Speculative HeTM (SHeTM). SHeTM makes use of a novel design that leverages speculative techniques and aims at hiding the inherently large communication latency between CPUs and discrete GPUs and at minimizing inter-device synchronization overhead. SHeTM is based on a modular and extensible design that allows for easily integrating alternative TM implementations on the CPU's and GPU's sides, providing the flexibility to adopt, on either side, the TM implementation (e.g., in hardware or software) that best fits the application's workload and the architectural characteristics of the processing unit. We demonstrate the efficiency of SHeTM via an extensive quantitative study based both on synthetic benchmarks and on a port of a popular object caching system. Comment: accepted at the 28th International Conference on Parallel Architectures and Compilation Techniques (PACT'19).
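    The speculative execution idea at the heart of SHeTM can be illustrated with a toy software transaction: buffer writes, record the versions of everything read, and validate at commit time, aborting if a conflicting update happened meanwhile. This is a single-process sketch of the general optimistic-TM concept only, not SHeTM's actual CPU/GPU implementation.

    ```python
    # Toy optimistic transaction: speculative writes go to a private buffer,
    # reads record per-key versions, and commit validates the read set.
    class Memory:
        def __init__(self):
            self.data = {}
            self.version = {}   # per-key version counters

        def write(self, key, value):
            self.data[key] = value
            self.version[key] = self.version.get(key, 0) + 1

    class Transaction:
        def __init__(self, mem):
            self.mem = mem
            self.read_set = {}   # key -> version observed at read time
            self.write_set = {}  # speculative (buffered) writes

        def read(self, key):
            if key in self.write_set:          # read-your-own-writes
                return self.write_set[key]
            self.read_set[key] = self.mem.version.get(key, 0)
            return self.mem.data.get(key)

        def write(self, key, value):
            self.write_set[key] = value        # buffered, not yet visible

        def commit(self):
            # Abort if any key we read was modified since we read it.
            for key, ver in self.read_set.items():
                if self.mem.version.get(key, 0) != ver:
                    return False               # abort: conflict detected
            for key, value in self.write_set.items():
                self.mem.write(key, value)     # publish speculative writes
            return True
    ```

    In SHeTM the same validate-then-publish pattern is what hides CPU-GPU transfer latency: each device runs speculatively on its copy and conflicts are resolved when the devices synchronize.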

    Implementation and Evaluation of a GraphQL-Based Web Application for Project Follow-Up

    This thesis is about APIs and web development. The technologies presented include GraphQL and REST as a basis for the implementation of the web application. The purpose of the thesis was to create a web-based tool for project follow-up. The main goal of the tool is to provide reporting functionalities and views for project follow-up based on data from the public APIs provided by Toggl. In the first part of the thesis, relevant theory about GraphQL, REST, and APIs is provided. Web APIs are presented, along with common protocols such as HTTP, as well as different data formats for the serialization of the data to be transmitted over the network. A literature review is also performed on the current state of research on GraphQL-based APIs as well as on comparisons of GraphQL and RESTful APIs. The second part consists of the design and implementation of the application. Toggl, a time tracking service, is described with its two different APIs, the Toggl API and the Report API. Further, the decision process of selecting the technologies for the developed tool is presented. One of the main parts of the tool is the synchronization mechanism for keeping the data in the database up to date with the data stored in Toggl. The other major part is about exposing the data via a GraphQL API and consuming it in a single-page application created in React. The developed application is a minimum viable product, fulfilling its purpose of providing reporting functionalities for projects based on the data provided by Toggl. It was developed with usability, flexibility, and testability in mind, enabling further development in the future. GraphQL was the choice of API technology and was considered a suitable approach in this application.
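    The core difference between the two API styles compared in the thesis can be shown with GraphQL's field selection: the client names exactly the fields it wants and the server returns only those, avoiding the over-fetching common with fixed REST responses. The entity and field names below are invented for illustration; they are not taken from the Toggl APIs or the thesis's schema.

    ```python
    # A fixed REST endpoint would return this whole record on every request.
    PROJECT = {
        "id": 42,
        "name": "Follow-up tool",
        "hours_logged": 120.5,
        "client": "Internal",
        "archived": False,
    }

    def resolve(entity, requested_fields):
        """GraphQL-style field selection: return only the requested fields."""
        return {f: entity[f] for f in requested_fields if f in entity}

    # Equivalent to the GraphQL query { project { name hours_logged } }:
    # the response carries just the two fields the reporting view needs.
    result = resolve(PROJECT, ["name", "hours_logged"])
    ```

    For a reporting tool that renders many projects per view, trimming each response to the displayed fields is the main practical payoff of choosing GraphQL over a fixed REST resource.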

    Automatic Cache Aware Roofline Model Building and Validation Using

    Proceedings of: Third International Workshop on Sustainable Ultrascale Computing Systems (NESUS 2016), Sofia (Bulgaria), October 6-7, 2016. The ever-growing complexity of high performance computing systems imposes significant challenges to fully exploiting their computational and memory resources. Recently, the Cache-Aware Roofline Model has gained popularity due to its simplicity when modeling multi-cores with complex memory hierarchies, characterizing application bottlenecks, and quantifying achieved or remaining improvements. In this short paper we use hardware locality topology detection to build the Cache-Aware Roofline Model for modern processors in an open-source locality-aware tool. The proposed tool also includes a set of specific micro-benchmarks to assess the micro-architecture's performance upper-bounds. The experimental results show that by relying on the proposed tool, it was possible to reach near-theoretical bounds of an Intel 3770K processor, thus proving the effectiveness of the modeling methodology. We would like to acknowledge Action IC1305 (NESUS) for funding this work.
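    The roofline bound the tool models is simple to state: attainable performance is capped either by peak compute or by memory bandwidth times arithmetic intensity, whichever is lower. A minimal sketch, with illustrative peak numbers rather than measured values for the Intel 3770K discussed in the paper:

    ```python
    # Roofline bound: attainable GFLOP/s at a given arithmetic intensity
    # (FLOPs per byte moved between the core and memory).
    def roofline(arithmetic_intensity, peak_gflops, bandwidth_gbs):
        """min(compute roof, memory roof) at this arithmetic intensity."""
        return min(peak_gflops, bandwidth_gbs * arithmetic_intensity)

    # Below the "ridge point" (peak_gflops / bandwidth_gbs, here 2.5 FLOPs/byte)
    # a kernel is memory-bound; above it, compute-bound.
    memory_bound  = roofline(0.25, peak_gflops=100.0, bandwidth_gbs=40.0)  # 10.0
    compute_bound = roofline(8.0,  peak_gflops=100.0, bandwidth_gbs=40.0)  # 100.0
    ```

    The micro-benchmarks mentioned in the paper exist precisely to measure realistic values for `peak_gflops` and `bandwidth_gbs` per memory level, so the roofs reflect what the hardware can actually sustain.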

    Can open-source projects (re-) shape the SDN/NFV-driven telecommunication market?

    Telecom network operators face rapidly changing business needs. Due to their dependence on long product cycles, they lack the ability to respond quickly to changing user demands. To spur innovation and stay competitive, network operators are investigating technological solutions with a proven track record in other application domains, such as open source software projects. Open source software enables parties to learn, use, or contribute to technology from which they were previously excluded. OSS has reshaped many application areas, including the landscape of operating systems and consumer software. The paradigm shift in telecommunication systems towards Software-Defined Networking introduces possibilities to benefit from open source projects. Implementing the control part of networks in software enables faster adaptation and innovation, and fewer dependencies on legacy protocols or algorithms hard-coded in the control part of network devices. The recently proposed concept of Network Function Virtualization pushes the softwarization of telecommunication functionality even further down to the data plane. Within the NFV paradigm, functionality which was previously reserved for dedicated hardware implementations can now be implemented in software and deployed on generic Commercial Off-The-Shelf (COTS) hardware. This paper provides an overview of existing open source initiatives for SDN/NFV-based network architectures, spanning infrastructure- to orchestration-related functionality. It situates them in a business process context and identifies the pros and cons for the market in general, as well as for individual actors.

    Development of a web-based platform for Biomedical Text Mining

    Master's dissertation in Informatics Engineering. Biomedical Text Mining (BTM) seeks to derive high-quality information from literature in the biomedical domain by creating tools and methodologies that can automate time-consuming tasks when searching for new information. This encompasses both Information Retrieval, the discovery and recovery of relevant documents, and Information Extraction, the capability to extract knowledge from text. In recent years, SilicoLife, with the collaboration of the University of Minho, has been developing @Note2, an open-source Java-based multiplatform BTM workbench, including libraries to perform the main BTM tasks, and also providing user-friendly interfaces through a stand-alone application. This work addressed the development of a web-based software platform that is able to address some of the main tasks within BTM, supported by the existing core libraries from the @Note project. This included improving the available RESTful server, providing some new methods and APIs while improving others, and also developing a web-based application that communicates with the server through calls to its API, providing a functional, user-friendly web-based interface. This work focused on the development of tasks related to Information Retrieval, addressing the efficient search of relevant documents through an integrated interface. At this stage, the aim was also to have interfaces to visualize and explore the main entities involved in BTM: queries, documents, corpora, annotation process entities, and resources.
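    The Information Retrieval side of such a pipeline reduces, at its simplest, to scoring documents against a query and returning them in ranked order. The sketch below uses bare term overlap; real workbenches such as @Note2 rely on far richer retrieval models, and the documents and query here are invented for illustration.

    ```python
    # Minimal keyword-overlap document ranking for an IR step in a
    # text-mining pipeline.
    def rank_documents(query, documents):
        """Return (doc_id, score) pairs sorted by descending term overlap."""
        terms = set(query.lower().split())
        scored = []
        for doc_id, text in documents.items():
            score = len(terms & set(text.lower().split()))
            if score > 0:
                scored.append((doc_id, score))
        return sorted(scored, key=lambda pair: (-pair[1], pair[0]))

    docs = {
        "d1": "gene expression in yeast",
        "d2": "protein folding dynamics",
        "d3": "yeast gene regulation networks",
    }
    ranking = rank_documents("yeast gene", docs)  # d1 and d3 match, d2 does not
    ```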

    Scalable and deterministic timing-driven parallel placement for FPGAs


    Low-Cost, Open-Source, and Low-Power: But What to Do With the Data?

    There are now many ongoing efforts to develop low-cost, open-source, low-power sensors and datalogging solutions for environmental monitoring applications. Many of these have advanced to the point that high-quality scientific measurements can be made using relatively inexpensive and increasingly off-the-shelf components. With the development of these innovative systems, however, comes the ability to generate large volumes of high-frequency monitoring data, and the challenge of how to log, transmit, store, and share the resulting data. This paper describes a new web application that was designed to enable citizen scientists to stream sensor data from a network of Arduino-based dataloggers to a web-based Data Sharing Portal. This system enables registration of new sensor nodes through a Data Sharing Portal website. Once registered, any Internet-connected data-logging device (e.g., connected via cellular or Wi-Fi) can then post data to the portal through a web service application programming interface (API). Data are stored in a back-end data store that implements Version 2 of the Observations Data Model (ODM2). Live data can then be viewed using multiple visualization tools, downloaded from the Data Sharing Portal in a simple text format, or accessed via WaterOneFlow web services for machine-to-machine data exchange. This system was built to support an emerging network of open-source, wireless water quality monitoring stations developed and deployed by the EnviroDIY community for do-it-yourself environmental science and monitoring, initially within the Delaware River Watershed. However, the architecture and components of the ODM2 Data Sharing Portal are generic, open-source, and could be deployed for use with any Internet-connected device capable of making measurements and formulating an HTTP POST request.
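    The "formulating an HTTP POST request" step can be sketched as follows: the datalogger packages its readings as a JSON body and sends it to the portal's web service API. The field names, node identifier, and endpoint path below are hypothetical illustrations, not the actual EnviroDIY/ODM2 wire format.

    ```python
    import json
    from datetime import datetime, timezone

    def build_payload(node_id, readings):
        """Package sensor readings as a JSON body for an HTTP POST
        to a data-sharing portal's web service API (hypothetical schema)."""
        return json.dumps({
            "node": node_id,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "readings": [
                {"variable": name, "value": value} for name, value in readings
            ],
        })

    body = build_payload("node-001", [("water_temp_C", 14.2),
                                      ("turbidity_NTU", 3.1)])
    # The device would then POST this body, with an authorization token in
    # the request headers, to the portal's data-ingest endpoint.
    ```

    Keeping the payload this small matters on cellular-connected loggers, where each byte transmitted costs power and airtime.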

    Modeling Large Compute Nodes with Heterogeneous Memories with Cache-Aware Roofline Model

    In order to fulfill modern applications' needs, computing systems have become more powerful, heterogeneous, and complex. NUMA platforms and emerging high-bandwidth memories offer new opportunities for performance improvements. However, they also increase hardware and software complexity, thus making application performance analysis and optimization an even harder task. The Cache-Aware Roofline Model (CARM) is an insightful, yet simple model designed to address this issue. It provides feedback on potential application bottlenecks and shows how far the application performance is from the achievable hardware upper-bounds. However, it does not encompass NUMA systems and next-generation processors with heterogeneous memories. Yet some application bottlenecks belong to those memory subsystems and would benefit from the CARM insights. In this paper, we fill the missing requirements to cover recent large shared memory systems with the CARM. We provide the methodology to instantiate and validate the model on a NUMA system as well as on the latest Xeon Phi processor equipped with configurable hybrid memory. Finally, we show the model's ability to exhibit several bottlenecks of such systems, which were not captured by the original CARM.
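    Extending the roofline to heterogeneous memories amounts to drawing one roof per memory level, each with its own sustainable bandwidth. A minimal sketch of that idea follows; the bandwidths and peak are illustrative values, not measurements from the paper's NUMA or Xeon Phi platforms.

    ```python
    # Cache-aware roofline sketch: one attainable-performance roof per
    # memory level, each capped by the shared compute peak.
    def carm_roofs(arithmetic_intensity, peak_gflops, level_bandwidths):
        """Attainable GFLOP/s when data is served from each memory level."""
        return {
            level: min(peak_gflops, bw * arithmetic_intensity)
            for level, bw in level_bandwidths.items()
        }

    roofs = carm_roofs(
        arithmetic_intensity=1.0,
        peak_gflops=500.0,
        level_bandwidths={"L1": 1000.0, "L2": 400.0,
                          "MCDRAM": 350.0, "DDR": 80.0},
    )
    # At AI = 1 FLOP/byte, only data served from L1 reaches the compute
    # peak; the other levels leave the kernel memory-bound.
    ```

    Comparing where a kernel's measured performance sits among these roofs is what lets the CARM attribute a bottleneck to a specific memory level, including the hybrid MCDRAM/DDR configurations the paper targets.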