
    A Language and Hardware Independent Approach to Quantum-Classical Computing

    Heterogeneous high-performance computing (HPC) systems offer novel architectures which accelerate specific workloads through judicious use of specialized coprocessors. A promising architectural approach for future scientific computations is provided by heterogeneous HPC systems integrating quantum processing units (QPUs). To this end, we present XACC (eXtreme-scale ACCelerator), a programming model and software framework that enables quantum acceleration within standard or HPC software workflows. XACC follows a coprocessor machine model that is independent of the underlying quantum computing hardware, thereby enabling quantum programs to be defined and executed on a variety of QPU types through a unified application programming interface. Moreover, XACC defines a polymorphic low-level intermediate representation and an extensible compiler frontend that enables language-independent quantum programming, thus promoting integration and interoperability across the quantum programming landscape. In this work we define the software architecture enabling our hardware- and language-independent approach, and demonstrate its usefulness across a range of quantum computing models through illustrative examples involving the compilation and execution of gate- and annealing-based quantum programs.
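    As an illustration of the coprocessor machine model and polymorphic intermediate representation described above, the following sketch (hypothetical Python with invented names, not the actual XACC API) shows one quantum kernel expressed as hardware-independent IR and handed to an interchangeable accelerator backend.

    ```python
    # Minimal sketch (hypothetical API, not the actual XACC interface): a hardware-
    # independent IR dispatched to interchangeable QPU backends.
    from abc import ABC, abstractmethod

    class Instruction:
        """One node of a polymorphic IR: a named operation on qubit indices."""
        def __init__(self, name, qubits, params=()):
            self.name, self.qubits, self.params = name, list(qubits), list(params)

    class QuantumKernel:
        """Language-independent program: an ordered list of IR instructions."""
        def __init__(self, instructions):
            self.instructions = list(instructions)

    class Accelerator(ABC):
        """Coprocessor abstraction: any QPU (or simulator) executes the same IR."""
        @abstractmethod
        def execute(self, kernel: QuantumKernel, shots: int) -> dict: ...

    class CountingSimulator(Accelerator):
        """Stand-in backend that just tallies gate names instead of simulating."""
        def execute(self, kernel, shots=1024):
            counts = {}
            for inst in kernel.instructions:
                counts[inst.name] = counts.get(inst.name, 0) + 1
            return {"shots": shots, "gate_counts": counts}

    # A Bell-pair style kernel expressed once, runnable on any Accelerator.
    bell = QuantumKernel([
        Instruction("H", [0]),
        Instruction("CNOT", [0, 1]),
        Instruction("Measure", [0, 1]),
    ])
    print(CountingSimulator().execute(bell))
    ```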

    CREOLE: a Universal Language for Creating, Requesting, Updating and Deleting Resources

    In the context of Service-Oriented Computing, applications can be developed following the REST (Representational State Transfer) architectural style. This style corresponds to a resource-oriented model, where resources are manipulated via CRUD (Create, Request, Update, Delete) interfaces. The diversity of CRUD languages due to the absence of a standard leads to composition problems related to the adaptation, integration and coordination of services. To overcome these problems, we propose a pivot architecture built around a universal language for manipulating resources, called CREOLE, a CRUD Language for Resource Edition. In this architecture, scripts written in existing CRUD languages, like SQL, are compiled into CREOLE and then executed over different CRUD interfaces. After stating the requirements for a universal language for manipulating resources, we formally describe the language and informally motivate its definition with respect to the requirements. We then concretely show how the architecture solves adaptation, integration and coordination problems in the case of photo management in Flickr and Picasa, two well-known service-oriented applications. Finally, we propose a roadmap for future work. Comment: In Proceedings FOCLASA 2010, arXiv:1007.499
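    The pivot idea can be sketched as follows (illustrative Python with hypothetical names, not the paper's formal CREOLE syntax): a source statement is compiled into a pivot CRUD operation, which a service-specific adapter then maps onto that service's REST conventions.

    ```python
    # Minimal sketch (hypothetical, not the paper's formal language): a pivot CRUD
    # representation compiled from a source script and executed against a
    # service-specific adapter, so one script can target Flickr- or Picasa-like APIs.
    from dataclasses import dataclass

    @dataclass
    class CrudOp:
        verb: str        # "create" | "request" | "update" | "delete"
        resource: str    # e.g. "photo"
        payload: dict

    class FlickrLikeAdapter:
        """Maps pivot operations onto one service's REST conventions."""
        def execute(self, op: CrudOp) -> str:
            routes = {"create": "POST /photos", "request": "GET /photos",
                      "update": "PUT /photos", "delete": "DELETE /photos"}
            return f"{routes[op.verb]} body={op.payload}"

    def compile_sql_insert(sql: str) -> CrudOp:
        """Toy front-end: turn "INSERT INTO photo (title) VALUES ('x')" into a pivot op."""
        # Real compilation would parse the statement; here we only handle this shape.
        table = sql.split()[2]
        value = sql.split("VALUES")[1].strip(" ()';\"")
        return CrudOp("create", table, {"title": value})

    op = compile_sql_insert("INSERT INTO photo (title) VALUES ('sunset')")
    print(FlickrLikeAdapter().execute(op))
    ```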

    A Comparison of Big Data Frameworks on a Layered Dataflow Model

    In the world of Big Data analytics, there is a series of tools aimed at simplifying the programming of applications to be executed on clusters. Although each tool claims to provide better programming, data and execution models, for which only informal (and often confusing) semantics is generally provided, all share a common underlying model, namely, the Dataflow model. The Dataflow model we propose shows how these tools share the same expressiveness at different levels of abstraction. The contribution of this work is twofold: first, we show that the proposed model is (at least) as general as existing batch and streaming frameworks (e.g., Spark, Flink, Storm), thus making it easier to understand high-level data-processing applications written in such frameworks. Second, we provide a layered model that can represent tools and applications following the Dataflow paradigm, and we show how the analyzed tools fit in each level. Comment: 19 pages, 6 figures, 2 tables, In Proc. of the 9th Intl Symposium on High-Level Parallel Programming and Applications (HLPP), July 4-5 2016, Muenster, Germany
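    A minimal sketch of the layering idea (our own illustration in Python, not the paper's formal model): the same logical pipeline is expressed once as a dataflow graph of operators, and an execution layer, here a naive in-process evaluator, runs it; Spark, Flink, or Storm would supply different execution layers for the same graph.

    ```python
    # Minimal sketch (illustrative, not the paper's formalism): a program as a
    # dataflow graph of operators, evaluated by one interchangeable execution layer.
    class Node:
        def __init__(self, fn, upstream=None):
            self.fn, self.upstream = fn, upstream

        def map(self, f):
            return Node(lambda xs: [f(x) for x in xs], self)

        def filter(self, p):
            return Node(lambda xs: [x for x in xs if p(x)], self)

    def source(data):
        return Node(lambda _: list(data))

    def run_sequential(node):
        """One possible execution layer: evaluate the graph in-process."""
        inp = run_sequential(node.upstream) if node.upstream else None
        return node.fn(inp)

    # The same logical pipeline a Spark/Flink/Storm program would express.
    pipeline = source(range(10)).filter(lambda x: x % 2 == 0).map(lambda x: x * x)
    print(run_sequential(pipeline))   # [0, 4, 16, 36, 64]
    ```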

    Advances in the Design and Implementation of a Multi-Tier Architecture in the GIPSY Environment

    We present advances in the software engineering design and implementation of the multi-tier run-time system for the General Intensional Programming System (GIPSY), further unifying the distributed technologies used to implement the Demand Migration Framework (DMF) in order to streamline the distributed execution of hybrid intensional-imperative programs using Java. Comment: 11 pages, 3 figures
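    The demand-driven pattern behind the Demand Migration Framework can be sketched roughly as follows (hypothetical Python, not GIPSY's actual Java classes): one tier posts demands to a shared store, another tier picks them up, evaluates them, and posts the results back.

    ```python
    # Minimal sketch (hypothetical, not the actual DMF): a demand store shared by a
    # generator tier (main thread) and a worker tier (background thread).
    import queue
    import threading

    demands, results = queue.Queue(), {}

    def worker():
        while True:
            key, expr = demands.get()
            if key is None:                 # poison pill stops the worker tier
                break
            results[key] = eval(expr)       # stand-in for evaluating a demanded expression
            demands.task_done()

    t = threading.Thread(target=worker)
    t.start()
    for i, expr in enumerate(["2 + 3", "7 * 6"]):
        demands.put((i, expr))              # generator tier posts demands
    demands.join()                          # wait until all demands are resolved
    demands.put((None, None))
    t.join()
    print(results)                          # {0: 5, 1: 42}
    ```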

    Event detection in location-based social networks

    With the advent of social networks and the rise of mobile technologies, users have become ubiquitous sensors capable of monitoring various real-world events in a crowd-sourced manner. Location-based social networks have proven to be faster than traditional media channels in reporting and geo-locating breaking news; for example, Osama Bin Laden’s death was first confirmed on Twitter even before the announcement from the communication department at the White House. However, the deluge of user-generated data on these networks requires intelligent systems capable of identifying and characterizing such events in a comprehensive manner. The data mining community coined the term event detection to refer to the task of uncovering emerging patterns in data streams. Nonetheless, most data mining techniques do not reproduce the underlying data generation process, which hampers their ability to self-adapt in fast-changing scenarios. Because of this, we propose a probabilistic machine learning approach to event detection which explicitly models the data generation process and enables reasoning about the discovered events. With the aim of setting forth the differences between the two approaches, we present two techniques for the problem of event detection in Twitter: a data mining technique called Tweet-SCAN and a machine learning technique called Warble. We assess and compare both techniques on a dataset of tweets geo-located in the city of Barcelona during its annual festivities. Last but not least, we present the algorithmic changes and data processing frameworks required to scale the proposed techniques up to big data workloads. This work is partially supported by Obra Social “la Caixa”, by the Spanish Ministry of Science and Innovation under contract (TIN2015-65316), by the Severo Ochoa Program (SEV2015-0493), by the SGR programs of the Catalan Government (2014-SGR-1051, 2014-SGR-118), by Collectiveware (TIN2015-66863-C2-1-R), and by the BSC/UPC NVIDIA GPU Center of Excellence. We would also like to thank the reviewers for their constructive feedback. Peer Reviewed. Postprint (author's final draft)
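    A simplified sketch of the data-mining flavour of event detection (illustrative Python, not the actual Tweet-SCAN or Warble code): geo-located, time-stamped tweets are grouped by density, and each dense group is a candidate event. The coordinates, scaling, and DBSCAN parameters below are invented for illustration, not values from the paper.

    ```python
    # Minimal sketch: density-based grouping of geo-located, time-stamped tweets.
    import numpy as np
    from sklearn.cluster import DBSCAN

    # Each row: (latitude, longitude, hours since the start of the festivity).
    tweets = np.array([
        [41.403, 2.174, 1.0], [41.404, 2.175, 1.1], [41.403, 2.176, 0.9],  # dense burst
        [41.380, 2.140, 5.0],                                              # isolated tweet
        [41.390, 2.160, 9.0], [41.391, 2.161, 9.2],                        # second burst
    ])

    # Scale time so that one hour "counts" roughly like 0.005 degrees of distance;
    # eps and min_samples are illustrative, not tuned values.
    features = tweets * np.array([1.0, 1.0, 0.005])
    labels = DBSCAN(eps=0.01, min_samples=2).fit_predict(features)
    print(labels)   # same label = same candidate event, -1 = noise
    ```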

    The SATIN component system - a metamodel for engineering adaptable mobile systems

    Mobile computing devices, such as personal digital assistants and mobile phones, are becoming increasingly popular, smaller, and more capable. We argue that mobile systems should be able to adapt to changing requirements and execution environments. Adaptation requires the ability to reconfigure the deployed code base on a mobile device. Such reconfiguration is considerably simplified if mobile applications are component-oriented rather than monolithic blocks of code. We present the SATIN (system adaptation targeting integrated networks) component metamodel, a lightweight local component metamodel that offers the flexible use of logical mobility primitives to reconfigure the software system by dynamically transferring code. The metamodel is implemented in the SATIN middleware system, a component-based mobile computing middleware that uses the mobility primitives defined in the metamodel to reconfigure both itself and the applications that it hosts. We demonstrate the suitability of SATIN, in terms of lightweightness, flexibility, and reusability, for the creation of adaptable mobile systems by using it to implement, port, and evaluate a number of existing and new applications, including an active network platform developed for satellite communication at the European Space Agency. These applications exhibit different aspects of adaptation and demonstrate the flexibility of the approach and the advantages gained.
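    The logical-mobility idea can be sketched as follows (hypothetical Python, not the actual SATIN middleware API): a local registry accepts code shipped to it as data and turns it into a live component, so the deployed code base can be reconfigured at run time.

    ```python
    # Minimal sketch (hypothetical, not SATIN's real classes): a component registry
    # with a single logical-mobility primitive that installs code received as data.
    import types

    class ComponentRegistry:
        """Holds the components currently deployed on a (simulated) mobile device."""
        def __init__(self):
            self._components = {}

        def deploy_from_source(self, name: str, source_code: str):
            """Logical mobility: receive code as data, turn it into a live component."""
            module = types.ModuleType(name)
            exec(source_code, module.__dict__)   # only trusted code in a real system
            self._components[name] = module

        def call(self, name: str, func: str, *args):
            return getattr(self._components[name], func)(*args)

    registry = ComponentRegistry()
    # Code "transferred" from elsewhere, e.g. a new handler pushed to the device.
    registry.deploy_from_source("greeter", "def hello(who):\n    return 'hello ' + who\n")
    print(registry.call("greeter", "hello", "satellite"))   # hello satellite
    ```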

    ASTRI SST-2M prototype and mini-array simulation chain, data reduction software, and archive in the framework of the Cherenkov Telescope Array

    The Cherenkov Telescope Array (CTA) is a worldwide project aimed at building the next-generation ground-based gamma-ray observatory. Within the CTA project, the Italian National Institute for Astrophysics (INAF) is developing an end-to-end prototype of the CTA Small-Size Telescopes with a dual-mirror (SST-2M) Schwarzschild-Couder configuration. The prototype, named ASTRI SST-2M, is located at the INAF "M.C. Fracastoro" observing station in Serra La Nave (Mt. Etna, Sicily) and is currently in the scientific and performance validation phase. A mini-array of (at least) nine ASTRI telescopes has since been proposed for deployment at the southern CTA site, through a collaborative effort carried out by institutes from Italy, Brazil, and South Africa. The CTA/ASTRI team is developing an end-to-end software package for the reduction of the raw data acquired with both the ASTRI SST-2M prototype and the mini-array, with the aim of actively contributing to the ongoing global activities for the official data handling system of the CTA observatory. The group is also undertaking a massive Monte Carlo simulation data production using the detector Monte Carlo software adopted by the CTA consortium. Simulated data are being used to validate the simulation chain and to evaluate the ASTRI SST-2M prototype and mini-array performance. Both activities are also carried out in the framework of the European H2020-ASTERICS (Astronomy ESFRI and Research Infrastructure Cluster) project. A data archiving system for both the ASTRI SST-2M prototype and the mini-array has also been developed by the CTA/ASTRI team, as a testbed for the scientific archive of CTA. In this contribution, we present the main components of the ASTRI data handling systems and report the status of their development. Comment: Proceedings of the 35th International Cosmic Ray Conference (ICRC 2017), Bexco, Busan, Korea. All CTA contributions at arXiv:1709.0348