
    Improving low latency applications for reconfigurable devices

    This thesis seeks to improve low latency application performance via architectural improvements in reconfigurable devices. This is achieved by improving resource utilisation and access, and by exploiting the different environments within which reconfigurable devices are deployed. Our first contribution leverages devices deployed at the network level to enable the low latency processing of financial market data feeds. Financial exchanges transmit messages via two identical data feeds to reduce the chance of message loss. We present an approach to arbitrate these redundant feeds at the network level using a Field-Programmable Gate Array (FPGA). With support for any messaging protocol, we evaluate our design using the NASDAQ TotalView-ITCH, OPRA, and ARCA data feed protocols, and provide two simultaneous outputs: one prioritising low latency, and one prioritising high reliability with three dynamically configurable windowing methods. Our second contribution is a new ring-based architecture for low latency, parallel access to FPGA memory. Traditional FPGA memory is formed by grouping block memories (BRAMs) together and accessing them as a single device. Our architecture accesses these BRAMs independently and in parallel. Targeting memory-based computing, which stores pre-computed function results in memory, we benefit low latency applications that rely on: highly-complex functions; iterative computation; or many parallel accesses to a shared resource. We assess square root, power, trigonometric, and hyperbolic functions within the FPGA, and provide a tool to convert Python functions to our new architecture. Our third contribution extends the ring-based architecture to support any FPGA processing element. We unify E heterogeneous processing elements within compute pools, with each element implementing the same function, and the pool serving D parallel function calls. Our implementation-agnostic approach supports processing elements with different latencies, implementations, and pipeline lengths, as well as non-deterministic latencies. Compute pools evenly balance access to processing elements across the entire application, and are evaluated by implementing eight different neural network activation functions within an FPGA.
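
    The ring-based memory architecture targets memory-based computing, where a function's results are pre-computed and stored in block memories rather than evaluated at run time. As a purely illustrative sketch (not the thesis's Python-to-architecture conversion tool), the Python fragment below shows that pre-computation step under assumed table depth and fixed-point widths; the function choice, address width, and fraction bits are hypothetical, and on the FPGA each such table bank would correspond to an independently addressed BRAM.

        # Hedged sketch: build a fixed-point lookup table for a Python function, the
        # kind of table that memory-based computing stores in FPGA block RAMs.
        # ADDR_BITS, FRAC_BITS and the example function are illustrative assumptions.
        import math

        ADDR_BITS = 10      # 2**10 = 1024 entries, roughly one BRAM-sized bank
        FRAC_BITS = 16      # outputs quantised to 16 fractional bits

        def build_lut(fn, x_min, x_max, addr_bits=ADDR_BITS, frac_bits=FRAC_BITS):
            """Sample fn over [x_min, x_max] and quantise the results to fixed point."""
            depth = 1 << addr_bits
            step = (x_max - x_min) / (depth - 1)
            return [round(fn(x_min + i * step) * (1 << frac_bits)) for i in range(depth)]

        def lookup(lut, x, x_min, x_max, frac_bits=FRAC_BITS):
            """Replace a run-time evaluation of fn(x) with a single table read."""
            depth = len(lut)
            idx = round((x - x_min) / (x_max - x_min) * (depth - 1))
            return lut[max(0, min(depth - 1, idx))] / (1 << frac_bits)

        sqrt_lut = build_lut(math.sqrt, 0.0, 4.0)
        print(lookup(sqrt_lut, 2.0, 0.0, 4.0))    # ~1.414, to within the table resolution

    Several such banks accessed independently and in parallel, as in the ring-based architecture above, would allow many lookups per cycle instead of serialising them through a single memory port.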

    A comparison framework and review of service brokerage solutions for cloud architectures

    Cloud service brokerage has been identified as a key concern for future cloud technology development and research. We compare service brokerage solutions across a range of specific concerns such as architecture, programming, and quality, applying a two-pronged classification and comparison framework. We identify challenges and wider research objectives based on an identification of cloud broker architecture concerns and technical requirements for service brokerage solutions, and we discuss complex cloud architecture concerns such as the commoditisation and federation of integrated, vertical cloud stacks.

    Video Processing Acceleration using Reconfigurable Logic and Graphics Processors

    A vexing question is `which architecture will prevail as the core feature of the next state of the art video processing system?' This thesis examines the substitutive and collaborative use of the two alternatives of the reconfigurable logic and graphics processor architectures. A structured approach to executing architecture comparison is presented - this includes a proposed `Three Axes of Algorithm Characterisation' scheme and a formulation of performance drivers. The approach is an appealing platform for clearly defining the problem, assumptions and results of a comparison. In this work it is used to resolve the advantageous factors of the graphics processor and reconfigurable logic for video processing, and the conditions determining which one is superior. The comparison results prompt the exploration of the customisable options for the graphics processor architecture. To clearly define the architectural design space, the graphics processor is first identified as part of a wider scope of homogeneous multi-processing element (HoMPE) architectures. A novel exploration tool is described which is suited to the investigation of the customisable options of HoMPE architectures. The tool adopts a systematic exploration approach and a high-level parameterisable system model, and is used to explore pre- and post-fabrication customisable options for the graphics processor. A positive result of the exploration is the proposal of a reconfigurable engine for data access (REDA) to optimise graphics processor performance for video processing-specific memory access patterns. REDA demonstrates the viability of the use of reconfigurable logic as collaborative `glue logic' in the graphics processor architecture.
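
    The abstract does not spell out the `Three Axes of Algorithm Characterisation' scheme or the performance-driver formulation, so the sketch below is only a toy illustration of the comparison idea under assumed axes (arithmetic intensity, memory-access regularity, data-level parallelism) and made-up weights; none of these names or numbers come from the thesis.

        # Toy illustration only: assumed characterisation axes and weights,
        # not the thesis's scheme or its performance drivers.
        from dataclasses import dataclass

        @dataclass
        class KernelProfile:
            arithmetic_intensity: float   # ops per byte moved, normalised to 0..1
            access_regularity: float      # 1.0 = fully streaming, 0.0 = data-dependent
            data_parallelism: float       # fraction of per-pixel work that is independent

        def favoured_architecture(k: KernelProfile) -> str:
            """Crude heuristic: GPUs reward regular, massively parallel kernels, while
            reconfigurable logic rewards irregular access and custom datapaths."""
            gpu = 0.5 * k.data_parallelism + 0.5 * k.access_regularity
            fpga = 0.6 * (1.0 - k.access_regularity) + 0.4 * k.arithmetic_intensity
            return "graphics processor" if gpu > fpga else "reconfigurable logic"

        motion_estimation = KernelProfile(0.7, 0.4, 0.8)   # made-up example scores
        print(favoured_architecture(motion_estimation))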

    Measurement-Based Worst-Case Execution Time Estimation Using the Coefficient of Variation

    Extreme Value Theory (EVT) has been historically used in domains such as finance and hydrology to model worst-case events (e.g., major stock market incidents). EVT takes as input a sample of the distribution of the variable to model and fits the tail of that sample to either the Generalised Extreme Value (GEV) or the Generalised Pareto Distribution (GPD). Recently, EVT has become popular in real-time systems to derive worst-case execution time (WCET) estimates of programs. However, the application of EVT is not straightforward and requires a detailed analysis of, and customisation for, the particular problem at hand. In this article, we tailor the application of EVT to timing analysis. To that end, (1) we analyse the response time of different hardware resources (e.g., cache memories) and identify those that may lead to radically different types of execution time distributions. (2) We show that one of these distributions, known as a mixture distribution, causes problems in the use of EVT. In particular, mixture distributions challenge not only properly selecting GEV/GPD parameters (i.e., location, scale and shape) but also determining the size of the sample to ensure that enough tail values are passed to EVT and that only tail values are used by EVT to fit GEV/GPD. Failing to select these parameters correctly has a negative impact on the quality of the derived WCET estimates. We tackle these problems by (3) proposing Measurement-Based Probabilistic Timing Analysis using the Coefficient of Variation (MBPTA-CV), a new mixture-distribution aware, WCET-suited MBPTA method that builds on recent EVT developments in other fields (e.g., finance) to automatically select the distribution parameters that best fit the maxima of the observed execution times. Our results on a simulation environment and a real board show that MBPTA-CV produces high-quality WCET estimates. The research leading to these results has received funding from the European Community’s FP7 [FP7/2007-2013] under the PROXIMA Project (www.proxima-project.eu), grant 611085. This work has also been partially supported by the Spanish Ministry of Science and Innovation under grant TIN2015-65316-P and the HiPEAC Network of Excellence. Jaume Abella was partially supported by the Ministry of Economy and Competitiveness under Ramon y Cajal postdoctoral fellowship RYC-2013-14717.
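
    As a hedged sketch of the workflow described above (not the published MBPTA-CV implementation), the Python fragment below takes a sample of execution time measurements, uses the coefficient of variation of threshold exceedances to choose a tail size where the exceedances look roughly exponential (CV close to 1), fits a Generalised Pareto Distribution to that tail, and extrapolates a probabilistic WCET estimate. The candidate tail sizes, the target exceedance probability, and the synthetic measurements are assumptions for illustration.

        # Hedged MBPTA-CV-style sketch; tail sizes, the 1e-12 exceedance probability
        # and the synthetic timing data are illustrative assumptions.
        import numpy as np
        from scipy.stats import genpareto

        def cv(values):
            """Coefficient of variation: sample standard deviation over mean."""
            return np.std(values, ddof=1) / np.mean(values)

        def pwcet_estimate(exec_times, exceedance_prob=1e-12,
                           candidate_tail_sizes=range(50, 501, 25)):
            x = np.sort(np.asarray(exec_times, dtype=float))

            def excesses(k):
                # Amounts by which the k largest times exceed the next-largest observation.
                return x[-k:] - x[-(k + 1)]

            # Exceedances of an exponential tail have CV ~= 1, so keep the tail size
            # whose exceedances come closest to that.
            k = min(candidate_tail_sizes, key=lambda n: abs(cv(excesses(n)) - 1.0))
            threshold = x[-(k + 1)]
            shape, _, scale = genpareto.fit(excesses(k), floc=0.0)
            p_over = k / len(x)               # empirical P(execution time > threshold)
            # Invert the fitted tail: value exceeded with probability exceedance_prob.
            return threshold + genpareto.isf(exceedance_prob / p_over, shape,
                                             loc=0.0, scale=scale)

        rng = np.random.default_rng(0)
        measurements = rng.gamma(shape=20.0, scale=5.0, size=10_000)   # synthetic timings
        print(f"pWCET at 1e-12 per run: {pwcet_estimate(measurements):.1f}")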

    Creating and Applying an Evaluation Framework for the National Decision Support Programme in Scotland

    Context: The Scottish Government recognises the importance of decision support to improve knowledge management in health and care settings as a strategic priority. To this end, it funded the 2015 National Decision Support Roadmap. This laid out a plan for procuring and building a Decision Support Platform, delivering a range of small-scale demonstrators (including several mobile platforms for specific user groups, e.g. polypharmacy and diabetes), and building clinician and policy engagement for further funding.
    Aims: We were commissioned to undertake a formative evaluation of the National Decision Support Programme to help facilitate the wider, effective roll-out of the systems included in the Roadmap.
    Methods: We collected qualitative data through a series of in-depth interviews and observations of workshops demonstrating technological systems. Participants included policy makers and clinical leads involved in the National Decision Support Programme. As the Programme was in the early stages of strategy development and system implementation at the time of data collection, we focused on exploring expectations and drivers of Cambio (a pilot platform) being tested in primary care. This system delivers an open standards based algorithm editor and engine which is linked with bespoke decision support applications delivered as web and mobile products and integrated into primary care electronic health record systems. The web and mobile solutions linked to the Cambio algorithms platform were developed by Scottish partners (Tactuum and the University of the West of Scotland). Employing a flexible methodological approach tailored to changing circumstances and needs offered important opportunities for realising true impact through ongoing formative feedback to policymakers and active engagement of key clinical stakeholders. Our work was informed by sociotechnical principles and a health information infrastructure perspective. Qualitative data were coded with the help of NVivo software and analysed through a combination of inductive and deductive approaches.
    Findings: We collected data through 30 interviews and eight non-participant ethnographic observations of early stakeholder engagement workshops. We developed and applied a theoretically-informed evaluation framework, which we refined throughout our analysis. Overall, we observed a strong sense of support from all stakeholders for Cambio as an exemplar of an open standards based, customisable decision support platform, and for proposals to roll this model out across NHS Scotland. Strategic drivers included facilitating integration of care, preventative care, patient self-management, shared decision-making, patient engagement, and the availability of information. However, in order to achieve the desired benefits, participants highlighted the need for strong national leadership, system usability (which was perceived to be negatively affected by alert fatigue and integration with existing systems), and ongoing monitoring of potential unintended consequences emerging from implementations (e.g. clinical workloads).
    Conclusions and implications: In order to address potential tensions between national leadership and local usability, as well as unintended consequences, there is a need for overall national ownership to support the implementation of the Roadmap, whilst the implementation of individual applications needs to be devolved. This could be achieved through allowing a degree of local customisation of systems and tailoring of alerts, ongoing system development with continuing stakeholder engagement including “hands-on” experience for clinicians, a limited number of pilots that are carefully evaluated to mitigate emerging risks early, and the development of a nuanced benefits realisation framework that combines smaller and locally relevant measures determined by implementing sites with national progress measures.

    Designing Institutional Infrastructure for E-Science

    A new generation of information and communication infrastructures, including advanced Internet computing and Grid technologies, promises more direct and shared access to more widely distributed computing resources than was previously possible. Scientific and technological collaboration, consequently, is more and more dependent upon access to, and the sharing of, digital research data. Thus, the U.S. NSF Directorate committed in 2005 to a major research funding initiative, “Cyberinfrastructure Vision for 21st Century Discovery”. These investments are aimed at the enhancement of computer and network technologies, and the training of researchers. Animated by much the same view, the UK e-Science Core Programme preceded the NSF effort in funding the development of an array of open standard middleware platforms, intended to support Grid enabled science and engineering research. This proceeds from the sceptical view that engineering breakthroughs alone will not be enough to achieve the outcomes envisaged. Success in realizing the potential of e-Science through the collaborative activities supported by the "cyberinfrastructure", if it is to be achieved, will be the result of a nexus of interrelated social, legal, and technical transformations.
    Keywords: e-science, cyberinfrastructure, information sharing, research

    TOWARDS INSTITUTIONAL INFRASTRUCTURES FOR E-SCIENCE: The Scope of the Challenge

    The three-fold purpose of this Report to the Joint Information Systems Committee (JISC) of the Research Councils (UK) is to:
    • articulate the nature and significance of the non-technological issues that will bear on the practical effectiveness of the hardware and software infrastructures that are being created to enable collaborations in e-Science;
    • characterise succinctly the fundamental sources of the organisational and institutional challenges that need to be addressed in regard to defining the terms, rights and responsibilities of the collaborating parties, and to illustrate these by reference to the limited experience gained to date in regard to intellectual property, liability, privacy, and security and competition policy issues affecting scientific research organisations; and
    • propose approaches for arriving at institutional mechanisms whose establishment would generate workable, specific arrangements facilitating collaboration in e-Science, and that might also serve to meet similar needs in other spheres such as e-Learning, e-Government, e-Commerce, and e-Healthcare.
    In carrying out these tasks, the report examines developments in enhanced computer-mediated telecommunication networks and digital information technologies, and recent advances in technologies of collaboration. It considers the economic and legal aspects of scientific collaboration, with attention to interactions between formal contracting and 'private ordering' arrangements that rest upon research community norms. It offers definitions of e-Science, virtual laboratories, and collaboratories, and develops a taxonomy of collaborative e-Science activities which is used to classify British e-Science pilot projects and contrast these with US collaboratory projects funded during the 1990s. The approach to facilitating inter-organizational participation in collaborative projects rests upon the development of a modular structure of contractual clauses that permit flexibility and experience-based learning.