    Developing a requirements management toolset: Lessons learned

    Requirements Engineering (RE) is a multi-faceted discipline involving various methods, techniques and tools. RE researchers and practitioners emphasize the importance of having an integrated RE process, and the need for an integrated toolset to support the effective management of such a process cannot be over-emphasized. Tool integration has been identified as an important next step toward the future of requirements management tools. This paper reports on some of the significant architectural and technical issues encountered, and the lessons learned, in developing an integrated Requirements Management (RM) toolset, the PARsed Natural language Input Processor (PARSNIP), by integrating various independent tools. It provides insights into the architectural and technological issues typical of these types of projects and into the approaches and techniques used to address the architectural mismatches and technological incompatibilities.

    Translating Timing into an Architecture: The Synergy of COTSon and HLS (Domain Expertise—Designing a Computer Architecture via HLS)

    Translating a system requirement into a low-level representation (e.g., register transfer level or RTL) is the typical goal of the design of FPGA-based systems. However, the Design Space Exploration (DSE) needed to identify the final architecture may be time-consuming, even when using high-level synthesis (HLS) tools. In this article, we illustrate our hybrid methodology, which uses a frontend for HLS so that the DSE is performed more rapidly at a higher level of abstraction, but without losing accuracy, thanks to the HP Labs COTSon simulation infrastructure in combination with our DSE tools (MYDSE tools). In particular, the proposed methodology proved useful for arriving at an appropriate design of a whole system in a shorter time than designing everything directly in HLS. Our motivating problem was to deploy a novel execution model called data-flow threads (DF-Threads) running on yet-to-be-designed hardware. For that goal, using HLS directly was premature at that point in the design cycle. Therefore, a key point of our methodology is to define the first prototype in our simulation framework and gradually migrate the design into the Xilinx HLS after validating the key performance metrics of our novel system in the simulator. To explain this workflow, we first use a simple driving example consisting of the modelling of a two-way set-associative cache. Then, we explain how we generalized this methodology and describe the types of results that we were able to analyze in the AXIOM project, which helped us reduce the development time from months/weeks to days/hours.
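
    As an illustration of the driving example mentioned above, the following is a minimal Python sketch of a two-way set-associative cache model with LRU replacement; the class name, parameters, and address trace are illustrative assumptions and do not reproduce the AXIOM/MYDSE sources.

```python
# Minimal behavioural model of a two-way set-associative cache with LRU
# replacement. Names and parameter values are illustrative assumptions only.

class TwoWayCache:
    def __init__(self, num_sets=64, line_size=64):
        self.num_sets = num_sets
        self.line_size = line_size
        # Each set holds up to two tags; index 0 is the most recently used.
        self.sets = [[] for _ in range(num_sets)]
        self.hits = 0
        self.misses = 0

    def access(self, address):
        line = address // self.line_size
        index = line % self.num_sets
        tag = line // self.num_sets
        ways = self.sets[index]
        if tag in ways:
            self.hits += 1
            ways.remove(tag)        # refresh the LRU order
        else:
            self.misses += 1
            if len(ways) == 2:      # evict the least recently used way
                ways.pop()
        ways.insert(0, tag)

# Example: replay a short address trace and report hits/misses.
if __name__ == "__main__":
    cache = TwoWayCache()
    trace = [0x0, 0x40, 0x0, 0x1000, 0x2000, 0x40]
    for addr in trace:
        cache.access(addr)
    print(f"hits={cache.hits} misses={cache.misses}")
```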

    Massively Parallel Computation Using Graphics Processors with Application to Optimal Experimentation in Dynamic Control

    The rapid increase in the performance of graphics hardware, coupled with recent improvements in its programmability, has led to its adoption in many non-graphics applications, including a wide variety of scientific computing fields. At the same time, a number of important dynamic optimal policy problems in economics are starved for computing power to help overcome the dual curses of complexity and dimensionality. We investigate whether computational economics may benefit from these new tools using a case study of an imperfect-information dynamic programming problem with a learning-and-experimentation trade-off, that is, a choice between controlling the policy target and learning the system parameters. Specifically, we use a model of active learning and control of a linear autoregression with an unknown slope, which has appeared in a variety of macroeconomic policy and other contexts. The endogeneity of posterior beliefs makes the problem difficult in that the value function need not be convex and the policy function need not be continuous. This complication makes the problem a suitable target for massively parallel computation using graphics processors. Our findings are cautiously optimistic in that the new tools let us easily achieve a factor-of-15 performance gain relative to an implementation targeting single-core processors, and thus establish a better reference point on the computational speed vs. coding complexity trade-off frontier. While further gains and wider applicability may lie behind a steep learning barrier, we argue that the future of many computations belongs to parallel algorithms anyway.
    Keywords: Graphics Processing Units, CUDA programming, Dynamic programming, Learning, Experimentation
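
    A minimal sketch of the learning-versus-control trade-off described above, reduced to two periods and solved by brute-force Monte Carlo over a grid of candidate first-period actions; the two-period simplification, names, and parameter values are our assumptions, not the paper's full dynamic programme. Every candidate action and draw is evaluated independently, which is the property that makes the full problem amenable to GPU parallelization.

```python
import numpy as np

# Assumed toy model: y = beta*u + eps with unknown slope beta ~ N(b0, v0).
# Period 1 trades off tracking the target against learning beta; period 2
# plays the myopic rule given the updated posterior.

rng = np.random.default_rng(0)
b0, v0 = 0.5, 1.0        # prior mean/variance of the unknown slope
sigma2 = 1.0             # noise variance
y_star = 1.0             # policy target
n_draws = 20_000

u_grid = np.linspace(0.1, 4.0, 200)            # candidate period-1 actions
beta = b0 + np.sqrt(v0) * rng.standard_normal(n_draws)
eps1 = np.sqrt(sigma2) * rng.standard_normal(n_draws)

U = u_grid[:, None]                             # broadcast actions over draws
y1 = beta * U + eps1                            # realised period-1 outcome
loss1 = (y1 - y_star) ** 2

# Conjugate Bayesian update of the slope posterior after observing y1.
v1 = 1.0 / (1.0 / v0 + U ** 2 / sigma2)
b1 = v1 * (b0 / v0 + U * y1 / sigma2)

# Expected period-2 loss under the myopic rule u2 = b1*y_star/(b1**2 + v1).
loss2 = y_star ** 2 * v1 / (b1 ** 2 + v1) + sigma2

total = (loss1 + loss2).mean(axis=1)
u_myopic = b0 * y_star / (b0 ** 2 + v0)
print(f"myopic u1 = {u_myopic:.3f}, "
      f"experimentation-optimal u1 ~ {u_grid[np.argmin(total)]:.3f}")
```

    The experimentation-optimal first-period action is typically larger in magnitude than the myopic one: deviating from the myopic rule sharpens the posterior on the slope and lowers expected future losses.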

    The NTD Nanoscope: potential applications and implementations

    Background: Nanopore transduction detection (NTD) offers prospects for a number of highly sensitive and discriminative applications, including: (i) single nucleotide polymorphism (SNP) detection; (ii) targeted DNA re-sequencing; (iii) protein isoform assaying; and (iv) biosensing via antibody- or aptamer-coupled molecules. Nanopore event transduction involves single-molecule biophysics, engineered information flows, and nanopore cheminformatics. The NTD Nanoscope has nevertheless seen limited use in the scientific community, owing to a lack of information about potential applications and the limited availability of the device itself. Meta Logos Inc. is developing both pre-packaged device platforms and component-level (unassembled) kit platforms (the latter described here). In both cases a lipid bilayer workstation is first established, and augmentations and operational protocols are then provided to turn it into a nanopore transduction detector. In this paper we provide an overview of NTD Nanoscope applications and implementations. The NTD Nanoscope Kit, in particular, is a component-level reproduction of the standard NTD device used in previous research papers.
    Results: The NTD Nanoscope method is shown to functionalize a single nanopore with a channel-current modulator that is designed to transduce events, such as binding to a specific target. To expedite set-up, calibration, and troubleshooting of the kit components and signal-processing software in new lab settings, the NTD Nanoscope Kit includes a set of test buffers and control molecules based on experiments described in previous NTD papers (the model systems briefly described in what follows). The server interfacing for advanced signal-processing support is also briefly described.
    Conclusions: SNP assaying, SNP discovery, DNA sequencing and RNA-seq methods are typically limited by the error rate of the enzymes involved, such as methods based on the polymerase chain reaction (PCR). The NTD Nanoscope offers a means to obtain higher accuracy because it is a single-molecule method that does not inherently involve enzymes, using a functionalized nanopore instead.
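
    To make the notion of transducing events via channel-current modulation concrete, here is a minimal Python sketch of blockade-event detection on a current trace; the thresholds, dwell-time cutoff, and synthetic data are illustrative assumptions and stand in for the far richer feature extraction performed by the actual NTD signal-processing software.

```python
import numpy as np

# Illustrative sketch only (assumed thresholds and synthetic data).

def detect_blockades(trace, open_level, threshold_frac=0.8, min_samples=5):
    """Return (start, end, mean_level) for runs where the current stays
    below threshold_frac * open_level for at least min_samples samples."""
    blocked = trace < threshold_frac * open_level
    events, start = [], None
    for i, flag in enumerate(blocked):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_samples:
                events.append((start, i, float(trace[start:i].mean())))
            start = None
    if start is not None and len(trace) - start >= min_samples:
        events.append((start, len(trace), float(trace[start:].mean())))
    return events

# Synthetic example: open-channel current near 100 pA with two blockades.
rng = np.random.default_rng(1)
trace = 100 + rng.normal(0, 1.5, 2000)
trace[300:450] = 55 + rng.normal(0, 1.5, 150)    # deep, long blockade
trace[1200:1260] = 70 + rng.normal(0, 1.5, 60)   # shallower, shorter blockade

for start, end, level in detect_blockades(trace, open_level=100):
    print(f"event: samples {start}-{end}, dwell={end - start}, "
          f"mean level={level:.1f} pA")
```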

    Data Visualization to Evaluate and Facilitate Targeted Data Acquisitions in Support of a Real-time Ocean Forecasting System

    A robust evaluation toolset has been designed for the Naval Research Laboratory's Real-Time Ocean Forecasting System (RELO) with the purpose of facilitating an adaptive sampling strategy and providing more educated guidance for routing underwater gliders. The major challenges are to integrate with the existing operational system and to provide a bridge between the modeling and operational environments. Visualization is the selected approach, and the developed software is divided into three packages: the first verifies that the glider is actually following the waypoints and predicts the position of the glider for the next cycle's instructions; the second helps ensure that the delivered waypoints are both useful and feasible; the third provides confidence levels for the suggested path. The software is implemented in Python for portability and modularity, allowing easy expansion with new visuals.
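
    A minimal Python sketch of the core checks attributed above to the first package (is the glider near its commanded waypoint, and where will it surface next); the function names, tolerance, and simple dead-reckoning extrapolation are our assumptions and are not drawn from the RELO toolset itself.

```python
import math

# Illustrative sketch: waypoint-following check and dead-reckoned prediction.
EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def off_track_distance(fix, waypoint, tolerance_km=2.0):
    """Check whether the latest surfacing fix is within tolerance of the
    commanded waypoint."""
    d = haversine_km(*fix, *waypoint)
    return d, d <= tolerance_km

def predict_next_fix(prev_fix, last_fix, hours_ahead=4.0, hours_between=4.0):
    """Dead-reckon the next surfacing position by extrapolating the drift
    between the two most recent fixes."""
    dlat = (last_fix[0] - prev_fix[0]) / hours_between
    dlon = (last_fix[1] - prev_fix[1]) / hours_between
    return (last_fix[0] + dlat * hours_ahead, last_fix[1] + dlon * hours_ahead)

# Example: two surfacing fixes and the commanded waypoint.
prev_fix, last_fix = (29.10, -88.40), (29.14, -88.32)
waypoint = (29.15, -88.31)
dist, on_track = off_track_distance(last_fix, waypoint)
print(f"distance to waypoint: {dist:.2f} km, on track: {on_track}")
print("predicted next fix:", predict_next_fix(prev_fix, last_fix))
```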

    A classification and review of tools for developing and interacting with machine learning systems

    In this paper we aim to bring some order to the myriad of tools that have emerged in the field of Artificial Intelligence (AI), focusing on the field of Machine Learning (ML). For this purpose, we suggest a classification of the tools in which the categories are organized following the development lifecycle of an ML system, and we review the existing tools within each section of the classification. We believe this will help to better understand the ecosystem of tools currently available and will also allow us to identify niches in which to develop new tools to aid in the development of AI and ML systems. After reviewing the state of the art of the tools, we have identified three trends: the incorporation of humans into the loop of the machine learning process; the movement from ad-hoc and experimental approaches to a more engineering perspective; and the ability to make it easier for people without an educational background in the area to develop intelligent systems, in order to move the focus from the technical environment to the domain-specific problem.
    This work has been supported by the State Research Agency of the Spanish Government, grant PID2019-107194GB-I00 / AEI / 10.13039/501100011033, and by the Xunta de Galicia, grant ED431C 2018/34, with European Union ERDF funds. We wish to acknowledge the support received from the Centro de Investigación de Galicia “CITIC”, funded by the Xunta de Galicia and the European Union (European Regional Development Fund, Galicia 2014-2020 Program) under grant ED431G 2019/01.