394 research outputs found

    New hardware support for transactional memory and parallel debugging in multicore processors

    This thesis contributes to the area of hardware support for parallel programming by introducing new hardware elements into multicore processors, with the aim of improving the performance of new tools, abstractions and applications related to parallel programming, such as transactional memory and data race detectors. Specifically, we configure a hardware transactional memory system with signatures as part of the hardware support, and we develop a new hardware filter for reducing the signature size. We also develop the first hardware asymmetric data race detector (which is also able to tolerate such races), likewise based on hardware signatures. Finally, we propose a new hardware signature module that solves some of the problems we found in the previous tools related to the lack of flexibility in hardware signatures.
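Signature-based conflict detection of the kind configured above is commonly modelled as a Bloom filter over memory addresses. The sketch below is illustrative only (the sizes and hash scheme are assumptions, not the thesis's design): each transaction summarises its read or write set in a fixed-size bit vector, and a bitwise AND of two signatures flags a potential conflict.

```python
import hashlib

class Signature:
    """Software model of a hardware signature: a Bloom filter over
    memory addresses whose bitwise AND detects read/write conflicts."""

    def __init__(self, bits=256, hashes=2):
        self.bits = bits
        self.hashes = hashes
        self.vector = 0  # bit vector stored as a Python int

    def _positions(self, addr):
        # Double hashing derived from a SHA-256 digest; real hardware
        # would use fixed XOR networks over the address bits instead.
        digest = hashlib.sha256(str(addr).encode()).digest()
        h1 = int.from_bytes(digest[:4], "big") % self.bits
        h2 = int.from_bytes(digest[4:8], "big") % self.bits or 1
        return [(h1 + i * h2) % self.bits for i in range(self.hashes)]

    def insert(self, addr):
        for p in self._positions(addr):
            self.vector |= 1 << p

    def intersects(self, other):
        # A non-empty intersection signals a conflict; Bloom filters can
        # report false positives but never miss a real overlap.
        return (self.vector & other.vector) != 0

# Transaction A's write set and transaction B's read set.
write_a = Signature()
write_a.insert(0x100)
read_b = Signature()
read_b.insert(0x100)  # B reads an address A wrote: a true conflict
print(read_b.intersects(write_a))  # True
```

The false-positive rate grows as more addresses are inserted, which is why reducing signature size without inflating false conflicts (as the filter proposed above aims to do) matters.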

    XML Integrated Environment for Service-Oriented Data Management

    XML, as a family of related standards including a markup language (XML), formatting semantics (XSL style sheets), a linking syntax (XLink), and data schema standards, has emerged as a de facto standard for encoding and sharing data between applications. XML is designed to be simple, easily parsed and self-describing. XML is based on and supports the idea of separation of concerns: information content is separated from information rendering, and relationships between data elements are provided via simple nesting and references. As XML content grows, the ability to handle schemaless XML documents becomes more critical, since most XML documents have no schema or Document Type Definition (DTD). In addition, XML content and XML tools often need to be combined in effective ways for better performance and higher flexibility. In this research, we propose the XML Integrated Environment (XIE), a general-purpose service-oriented architecture for processing XML documents in a scalable and efficient fashion. XIE supports a new software service model that provides a proper abstraction to describe a service and divide it into four components: structure, connection, interface and logic. We also propose and implement the XIE Service Language (XIESL), which can capture the creation and maintenance of XML processes and the data flow specified by the user, and then orchestrate the interactions between different XIE services. Moreover, XIESL manages the complexity of XML processing by implementing an XML processing pipeline that enables better management, control, interpretation and presentation of XML data, even for non-professional users. The XML Integrated Environment is envisioned to revolutionize the way non-professional programmers see, work with and manage their XML assets. It offers them powerful tools and constructs to fully utilize the XML processing power embedded in its unified framework and service-oriented architecture.
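The four-component service model named above (structure, connection, interface, logic) can be pictured as follows. This is a hypothetical sketch: the field types, the example service, and its behaviour are illustrative assumptions, not the XIE API.

```python
# Hypothetical model of a four-part service (structure, connection,
# interface, logic); everything concrete here is an assumption made
# for illustration, not the thesis's actual definitions.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Service:
    structure: Dict[str, str]      # shape of the XML consumed/produced
    connection: List[str]          # upstream services this one reads from
    interface: Dict[str, str]      # exposed operations and their signatures
    logic: Callable[[str], str]    # the actual XML transformation step

# A toy pipeline stage that upper-cases its input, standing in for a
# real XML transformation orchestrated by a service language.
uppercase = Service(
    structure={"input": "any-xml", "output": "any-xml"},
    connection=[],
    interface={"run": "xml -> xml"},
    logic=lambda xml: xml.upper(),
)

print(uppercase.logic("<note>hi</note>"))  # <NOTE>HI</NOTE>
```

Separating the declarative parts (structure, connection, interface) from the imperative part (logic) is what lets an orchestration layer wire services together without inspecting their internals.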

    An FPGA implementation of an investigative many-core processor, Fynbos: in support of a Fortran autoparallelising software pipeline

    Includes bibliographical references. In light of the power, memory, ILP, and utilisation walls facing the computing industry, this work examines the hypothetical many-core approach to finding greater compute performance and efficiency. In order to achieve greater efficiency in an environment in which Moore’s law continues but TDP has been capped, a means of deriving performance from dark and dim silicon is needed. The many-core hypothesis is one approach to exploiting these available transistors efficiently. As understood in this work, it involves trading hardware control complexity for hundreds to thousands of parallel simple processing elements, operating at a clock speed low enough to allow the efficiency gains of near-threshold-voltage operation. Performance is therefore dependent on exploiting a new degree of fine-grained parallelism, such as is currently found only in GPGPUs, but in a manner less restrictive in application domain range. While removing the complex control hardware of traditional CPUs provides space for more arithmetic hardware, a basic level of control is still required. For a number of reasons this work chooses to replace this control largely with static scheduling. This pushes the burden of control primarily to the software, and specifically the compiler, rather than to the programmer or to an application-specific means of control simplification. An existing legacy tool chain exists that is capable of autoparallelising sequential Fortran code to the degree of parallelism necessary for many-core. This work implements a many-core architecture to match it. Prototyping the design on an FPGA makes it possible to examine the real-world performance of the compiler-architecture system to a greater degree than simulation alone would allow. Comparing theoretical peak performance and real performance in a case-study application, the system is found to be more efficient than any other reviewed, but also to significantly underperform relative to current competing architectures. This failing is attributed to taking the need for simple hardware too far, and to an inability to implement tactics mitigating the costs of static scheduling, due to a lack of support for such in the compiler.
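Static scheduling of the kind described above means the compiler, not the hardware, decides where and when each operation runs. A minimal single-machine sketch of one standard technique, list scheduling of a dependency DAG onto a fixed number of processing elements (task names, latencies, and PE count are toy assumptions, not Fynbos's actual scheduler):

```python
# Illustrative list scheduling: assign each task a (PE, start_time)
# entirely at compile time, respecting dependencies and PE availability.

def list_schedule(tasks, deps, num_pes):
    """tasks: {name: latency}; deps: {name: set of prerequisites}.
    Returns {name: (pe, start_time)}."""
    finish = {}                  # task -> finish time
    pe_free = [0] * num_pes      # earliest free cycle per PE
    schedule = {}
    remaining = dict(tasks)
    while remaining:
        # Pick a task whose prerequisites are all already scheduled.
        ready = [t for t in remaining if deps.get(t, set()) <= finish.keys()]
        t = min(ready)           # deterministic tie-break by name
        earliest = max((finish[d] for d in deps.get(t, set())), default=0)
        # Choose the PE that can start this task soonest.
        pe = min(range(num_pes), key=lambda p: max(pe_free[p], earliest))
        start = max(pe_free[pe], earliest)
        schedule[t] = (pe, start)
        finish[t] = start + remaining.pop(t)
        pe_free[pe] = finish[t]
    return schedule

# Diamond-shaped DAG: a feeds b and c, which both feed d.
sched = list_schedule(
    tasks={"a": 1, "b": 2, "c": 2, "d": 1},
    deps={"b": {"a"}, "c": {"a"}, "d": {"b", "c"}},
    num_pes=2,
)
print(sched)  # b and c land on different PEs and run in parallel
```

The "mitigating tactics" the abstract mentions (e.g. filling stalls when static latency estimates miss) are exactly what such a scheduler needs compiler support for.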

    Inglise-eesti masintõlge: hindamine erinevate mudelite ja arhitektuuri (English-Estonian machine translation: evaluation with different models and architectures)

    This thesis is based on three main objectives: first, the implementation of the RNMT+ architecture with the Relational-RNN model, i.e. an interaction between this architecture and that RNN model; second, training three different translation models based on the RNMT+, Transformer, and sequence-to-sequence architectures (performance comparisons among RNMT+ with LSTM, Transformer, seq2seq, etc. have been reported previously); finally, evaluating the translation models on the training data. When implementing RNMT+, the core idea was to use a newer type of Recurrent Neural Network (RNN) instead of the widely used LSTM or GRU. Besides this, we evaluate the RNMT+ model against models based on the state-of-the-art Transformer and sequence-to-sequence-with-attention architectures. This evaluation (BLEU) shows that neural machine translation is domain-dependent: translation based on the Transformer model performs better than the other two in the OpenSubtitle v2018 domain, while the RNMT+ model performs better than the other two in a cross-domain evaluation. Additionally, we compare all the above-mentioned architectures based on their corresponding encoder-decoder layers, attention mechanisms, and other available neural machine translation and statistical machine translation architectures.
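The BLEU evaluation mentioned above scores a candidate translation by clipped n-gram precision against a reference, scaled by a brevity penalty. A simplified single-reference, bigram-capped sketch for illustration (this is not the exact evaluation script used in the thesis, which would typically use corpus-level 4-gram BLEU):

```python
# Minimal sentence-level BLEU: geometric mean of clipped n-gram
# precisions times a brevity penalty for short candidates.
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=2):
    c_tok, r_tok = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(c_tok, n), ngrams(r_tok, n)
        overlap = sum((cand & ref).values())   # counts clipped by reference
        total = max(sum(cand.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # crude zero smoothing
    # Brevity penalty: penalise candidates shorter than the reference.
    bp = 1.0 if len(c_tok) >= len(r_tok) else math.exp(1 - len(r_tok) / len(c_tok))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

print(bleu("the cat sat on the mat", "the cat sat on the mat"))  # 1.0
print(bleu("the cat on mat", "the cat sat on the mat"))          # < 1.0
```

Domain dependence, as the evaluation found, shows up exactly here: n-gram overlap collapses when test-set vocabulary and phrasing drift from the training domain.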

    A survey of large-scale reasoning on the Web of data

    As more and more data is being generated by sensor networks, social media and organizations, the Web interlinking this wealth of information becomes more complex. This is particularly true for the so-called Web of Data, in which data is semantically enriched and interlinked using ontologies. In this large and uncoordinated environment, reasoning can be used to check the consistency of the data and of associated ontologies, or to infer logical consequences which, in turn, can be used to obtain new insights from the data. However, reasoning approaches need to be scalable in order to enable reasoning over the entire Web of Data. To address this problem, several high-performance reasoning systems, which mainly implement distributed or parallel algorithms, have been proposed in the last few years. These systems differ significantly; for instance, in terms of reasoning expressivity, computational properties such as completeness, or reasoning objectives. In order to provide a first complete overview of the field, this paper reports a systematic review of such scalable reasoning approaches over various ontological languages, reporting details about the methods and the conducted experiments. We highlight the shortcomings of these approaches and discuss some of the open problems related to performing scalable reasoning.
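One core workload such systems distribute is rule-based materialisation: repeatedly applying inference rules until no new triples appear. A minimal single-machine sketch of forward chaining over two RDFS-style rules (subclass transitivity, and type propagation along subclass links); the triples are toy assumptions and nothing here is a scalable implementation:

```python
# Forward-chaining materialisation to a fixed point over two rules:
#   (A subClassOf B), (B subClassOf C) => (A subClassOf C)
#   (x type A), (A subClassOf B)       => (x type B)

def materialise(triples):
    closed = set(triples)
    changed = True
    while changed:                       # iterate until a fixed point
        changed = False
        sub = {(s, o) for s, p, o in closed if p == "subClassOf"}
        typ = {(s, o) for s, p, o in closed if p == "type"}
        new = {(a, "subClassOf", c) for a, b in sub for b2, c in sub if b == b2}
        new |= {(x, "type", b) for x, a in typ for a2, b in sub if a == a2}
        if not new <= closed:
            closed |= new
            changed = True
    return closed

kb = {("lion", "type", "Cat"),
      ("Cat", "subClassOf", "Mammal"),
      ("Mammal", "subClassOf", "Animal")}
inferred = materialise(kb) - kb
print(sorted(inferred))
```

Distributed reasoners parallelise exactly this loop, sharding triples across machines; the survey's dimensions (expressivity, completeness) correspond to which rules are applied and whether the fixed point is fully reached.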

    ESys.Net : a new .Net based system-level design environment

    Thesis digitized by the Direction des bibliothèques de l'Université de Montréal

    Proceedings of the EACL Hackashop on News Media Content Analysis and Automated Report Generation

    Peer reviewed

    Linked Open Data - Creating Knowledge Out of Interlinked Data: Results of the LOD2 Project

    Database Management; Artificial Intelligence (incl. Robotics); Information Systems and Communication Service

    Progressive Network Deployment, Performance, and Control with Software-defined Networking

    The inflexible nature of traditional computer networks has led to tightly-integrated systems that are inherently difficult to manage and secure. New designs move low-level network control into software, creating software-defined networks (SDN). Augmenting an existing network with these enhancements can be expensive and complex. This research investigates solutions to these problems. It is hypothesized that an add-on device, or shim, could be used to make a traditional switch behave as an OpenFlow SDN switch while maintaining reasonable performance. A design prototype is found to cause approximately a 1.5% reduction in throughput for one flow and a less than twofold increase in latency, showing that such a solution may be feasible. It is hypothesized that a new design built on event-loop and reactive programming may yield a controller that is higher-performing and easier to program. The library node-openflow is found to have performance approaching that of professional controllers, though it exhibits higher variability in response rate. The framework rxdn is found to exceed the performance of two comparable controllers by at least 33% with statistical significance in latency mode with 16 simulated switches, but is slower than the library node-openflow or professional controllers (e.g., Libfluid, ONOS, and NOX). Collectively, this work enhances the tools available to researchers, enabling experimentation and development toward more sustainable and secure infrastructure.
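The event-loop/reactive controller style described above treats switch events as a stream that handlers subscribe to. A hypothetical sketch of the pattern; the event fields, handler, and `install_flow` callback are illustrative assumptions, not the rxdn or node-openflow API:

```python
# Reactive packet-in handling: events flow through a stream and
# subscribed handlers react by emitting flow rules to the switch.

class EventStream:
    def __init__(self):
        self.handlers = []

    def subscribe(self, fn):
        self.handlers.append(fn)

    def emit(self, event):
        for fn in self.handlers:
            fn(event)

installed = []  # flow rules the "switch" would receive

def install_flow(match, action):
    installed.append({"match": match, "action": action})

packet_in = EventStream()

# Learning-switch-style rule: when a packet misses the flow table,
# install a rule forwarding that destination out of the learned port.
def on_packet_in(event):
    install_flow(match={"dst": event["dst"]},
                 action={"output": event["learned_port"]})

packet_in.subscribe(on_packet_in)
packet_in.emit({"dst": "aa:bb:cc:dd:ee:ff", "learned_port": 2})
print(installed)
```

A single-threaded event loop like this avoids lock contention, which is one reason the abstract hypothesises it can be both fast and easy to program; real controllers additionally batch and serialise OpenFlow messages on the wire.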