
    A Language and Hardware Independent Approach to Quantum-Classical Computing

    Heterogeneous high-performance computing (HPC) systems offer novel architectures which accelerate specific workloads through judicious use of specialized coprocessors. A promising architectural approach for future scientific computations is provided by heterogeneous HPC systems integrating quantum processing units (QPUs). To this end, we present XACC (eXtreme-scale ACCelerator) --- a programming model and software framework that enables quantum acceleration within standard or HPC software workflows. XACC follows a coprocessor machine model that is independent of the underlying quantum computing hardware, thereby enabling quantum programs to be defined and executed on a variety of QPU types through a unified application programming interface. Moreover, XACC defines a polymorphic low-level intermediate representation and an extensible compiler frontend that enable language-independent quantum programming, thus promoting integration and interoperability across the quantum programming landscape. In this work we define the software architecture enabling our hardware- and language-independent approach, and demonstrate its usefulness across a range of quantum computing models through illustrative examples involving the compilation and execution of gate- and annealing-based quantum programs.
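
    As a rough illustration of the coprocessor machine model described above, the following Python sketch separates a hardware-independent accelerator interface from a toy compiler front end that produces an intermediate representation. The class and function names (Accelerator, SimulatorQPU, compile_source) are hypothetical and are not the actual XACC API.

```python
# Illustrative sketch of a coprocessor machine model with a unified API;
# names (Accelerator, Instruction, execute) are hypothetical, not XACC's API.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List


@dataclass
class Instruction:
    """One node of a polymorphic intermediate representation."""
    name: str          # e.g. "H", "CNOT", or an annealing term
    targets: List[int]


class Accelerator(ABC):
    """Hardware-independent view of a QPU attached as a coprocessor."""

    @abstractmethod
    def execute(self, program: List[Instruction]) -> dict:
        """Run a compiled program and return a result summary."""


class SimulatorQPU(Accelerator):
    """Stand-in backend; a real deployment would target vendor hardware."""

    def execute(self, program: List[Instruction]) -> dict:
        # A trivial 'execution' that just reports the program size.
        return {"executed_instructions": len(program)}


def compile_source(source: str) -> List[Instruction]:
    """Toy compiler front end: one whitespace-separated gate per token."""
    return [Instruction(name=tok, targets=[0]) for tok in source.split()]


if __name__ == "__main__":
    qpu: Accelerator = SimulatorQPU()      # swap in any backend here
    ir = compile_source("H CNOT MEASURE")  # language-independent IR
    print(qpu.execute(ir))
```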

    Linguistically-driven framework for computationally efficient and scalable sign recognition

    We introduce a new general framework for sign recognition from monocular video using limited quantities of annotated data. The novelty of the hybrid framework we describe here is that we exploit state-of-the-art learning methods while also incorporating features based on what we know about the linguistic composition of lexical signs. In particular, we analyze hand shape, orientation, location, and motion trajectories, and then use CRFs to combine this linguistically significant information for purposes of sign recognition. Our robust modeling and recognition of these sub-components of sign production allow an efficient parameterization of the sign recognition problem as compared with purely data-driven methods. This parameterization enables a scalable and extendable time-series learning approach that advances the state of the art in sign recognition, as shown by the results reported here for recognition of isolated, citation-form, lexical signs from American Sign Language (ASL).
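
    The abstract does not name a specific CRF toolkit, so the sketch below assumes sklearn-crfsuite and hypothetical per-frame analysis results; it only illustrates how hand shape, orientation, location, and motion features can be combined by a linear-chain CRF.

```python
# Sketch of combining per-frame linguistic features with a linear-chain CRF.
# The feature names and the sklearn-crfsuite toolkit are assumptions; the
# paper describes the feature classes but not a specific implementation.
import sklearn_crfsuite


def frame_features(frame: dict) -> dict:
    """Map one video frame's analysis results to CRF features."""
    return {
        "handshape": frame["handshape"],      # e.g. "B", "5", "S"
        "orientation": frame["orientation"],  # palm orientation class
        "location": frame["location"],        # major body location
        "motion": frame["motion"],            # quantized trajectory direction
    }


# One toy training sequence (a sign as a short sequence of analyzed frames).
X_train = [[
    frame_features({"handshape": "B", "orientation": "up",
                    "location": "chest", "motion": "outward"}),
    frame_features({"handshape": "B", "orientation": "up",
                    "location": "neutral", "motion": "outward"}),
]]
y_train = [["GIVE", "GIVE"]]  # per-frame labels for a toy lexical sign

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                           max_iterations=50)
crf.fit(X_train, y_train)
print(crf.predict(X_train))
```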

    On the Computation Power of Name Parameterization in Higher-order Processes

    Parameterization extends higher-order processes with the capability of abstraction (akin to that in the lambda-calculus), and is known to enhance their expressiveness. This paper focuses on the parameterization of names, i.e., a construct that maps a name to a process, in the higher-order setting. We provide two results concerning its computational capacity. First, name parameterization gives rise to a complete model, in the sense that it can express an elementary interactive model with built-in recursive functions. Second, we compare name parameterization with the well-known pi-calculus, and provide two encodings between them. Comment: In Proceedings ICE 2015, arXiv:1508.0459.
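
    As a toy illustration (not taken from the paper), name parameterization can be read as lambda-abstraction over channel names: an abstraction takes a name and yields a process. The Python sketch below uses a hypothetical process representation to make that instantiation step concrete.

```python
# Minimal illustration of name parameterization: an abstraction maps a
# channel name to a process, much like lambda-abstraction over names.
# The representation here is an illustrative toy, not the paper's calculus.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Nil:
    """The inactive process."""


@dataclass
class Output:
    """Send on channel `chan`, then continue as `cont`."""
    chan: str
    cont: "Process"


Process = Output | Nil


def name_abstraction(body: Callable[[str], Process]) -> Callable[[str], Process]:
    """A name abstraction <x>P: supplying a concrete name instantiates P."""
    return body


# <x> x!.0 -- abstract the output channel, then instantiate it twice.
send_on = name_abstraction(lambda x: Output(chan=x, cont=Nil()))
print(send_on("a"))  # Output(chan='a', cont=Nil())
print(send_on("b"))  # Output(chan='b', cont=Nil())
```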

    AMaχoS—Abstract Machine for Xcerpt

    Web query languages promise convenient and efficient access to Web data such as XML, RDF, or Topic Maps. Xcerpt is one such Web query language with a strong emphasis on novel high-level constructs for effective and convenient query authoring, particularly tailored to versatile access to data in different Web formats such as XML or RDF. However, so far it lacks an efficient implementation to supplement the convenient language features. AMaχoS is an abstract machine implementation for Xcerpt that aims at efficiency and ease of deployment. It strictly separates compilation and execution of queries: queries are compiled once to abstract machine code that consists of (1) a code segment with instructions for evaluating each rule and (2) a hint segment that provides the abstract machine with optimization hints derived during query compilation. This article summarizes the motivation and principles behind AMaχoS and discusses how its current architecture realizes these principles.
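
    A minimal sketch of the compile-once/execute-many split described above, assuming hypothetical names for the compiled structures; the (1) code segment and (2) hint segment are modelled as plain per-rule dictionaries and are not AMaχoS's actual data layout.

```python
# Sketch of the compile-once / execute-many split described above: compiled
# abstract machine code carries a per-rule code segment plus a hint segment.
# All names here are illustrative; they are not AMaχoS's actual structures.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class CompiledQuery:
    # (1) code segment: one instruction list per query rule
    code_segment: Dict[str, List[str]]
    # (2) hint segment: optimization hints derived during compilation
    hint_segment: Dict[str, str] = field(default_factory=dict)


def compile_query(rules: Dict[str, str]) -> CompiledQuery:
    """Toy 'compiler': one MATCH/CONSTRUCT instruction pair per rule."""
    code = {name: [f"MATCH {body}", "CONSTRUCT result"]
            for name, body in rules.items()}
    hints = {name: "evaluate-bottom-up" for name in rules}
    return CompiledQuery(code_segment=code, hint_segment=hints)


def execute(query: CompiledQuery) -> None:
    """Toy evaluator: interpret each rule's instructions in turn."""
    for rule, instructions in query.code_segment.items():
        hint = query.hint_segment.get(rule, "none")
        print(f"{rule} (hint: {hint}): {instructions}")


compiled = compile_query({"r1": "book[title]"})  # compile once...
execute(compiled)                                # ...execute many times
```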

    On the Interoperability of DEVS components: On-Line vs. Off-Line Strategies

    In recent years, the DEVS community has made many contributions toward the realization of a world-wide platform for collaborative Modeling & Simulation. The goal of such a platform would be to enable the sharing and reuse of models between scientists, as well as the seamless simulation of distributed and heterogeneous models. One of the major research topics is therefore the definition of architectures for integrating heterogeneous DEVS components, that is, simulators and/or models written in different frameworks and programming languages. In this work, we present three different strategies for providing such interoperability between DEVS components. The first focuses on standardizing exchanges between simulators and has been explored in previous work. The other two strategies are more prospective: in keeping with Model-Driven Engineering, they place the model at the center of the architecture and make extensive use of model transformations. To make this possible, we defined a platform- and language-independent format for describing and sharing DEVS models, called the DEVS Markup Language.
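
    To make the notion of a heterogeneous DEVS component concrete, the sketch below shows the atomic-DEVS interface (time advance, output, internal and external transitions) that any shared, language-independent model description would map onto. The class names are illustrative and do not reflect the DEVS Markup Language schema.

```python
# Minimal sketch of the atomic-DEVS interface that a shared, language-
# independent model description would have to map onto. The class and
# method names are illustrative; they are not the DEVSML schema itself.
from abc import ABC, abstractmethod
from typing import Any


class AtomicDEVS(ABC):
    """The four characteristic functions of an atomic DEVS model."""

    @abstractmethod
    def time_advance(self) -> float: ...          # ta(s)

    @abstractmethod
    def output(self) -> Any: ...                  # lambda(s)

    @abstractmethod
    def internal_transition(self) -> None: ...    # delta_int(s)

    @abstractmethod
    def external_transition(self, elapsed: float, event: Any) -> None: ...  # delta_ext


class Ping(AtomicDEVS):
    """Toy generator: emits 'ping' every second until stopped externally."""

    def __init__(self) -> None:
        self.active = True

    def time_advance(self) -> float:
        return 1.0 if self.active else float("inf")

    def output(self) -> Any:
        return "ping"

    def internal_transition(self) -> None:
        pass  # stay active, keep pinging

    def external_transition(self, elapsed: float, event: Any) -> None:
        self.active = False  # any external event silences the generator


m = Ping()
print(m.time_advance(), m.output())  # 1.0 ping
```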