
    An empirical evaluation of High-Level Synthesis languages and tools for database acceleration

    High Level Synthesis (HLS) languages and tools are emerging as the most promising technique to make FPGAs more accessible to software developers. Nevertheless, picking the most suitable HLS for a certain class of algorithms depends on requirements such as area and throughput, as well as on programmer experience. In this paper, we explore the different trade-offs present when using a representative set of HLS tools in the context of Database Management Systems (DBMS) acceleration. More specifically, we conduct an empirical analysis of four representative frameworks (Bluespec SystemVerilog, Altera OpenCL, LegUp and Chisel) that we utilize to accelerate commonly-used database algorithms such as sorting, the median operator, and hash joins. Through our implementation experience and empirical results for database acceleration, we conclude that the selection of the most suitable HLS depends on a set of orthogonal characteristics, which we highlight for each HLS framework.
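    One of the database kernels named above, the hash join, can be sketched briefly in software. This is purely illustrative (Python rather than the paper's HLS frameworks, with invented row values), showing only the build-then-probe structure such accelerators implement:

    ```python
    # Minimal hash join sketch: build a hash table on one relation,
    # then probe it with the other. Keys and rows here are invented.
    def hash_join(build_rows, probe_rows, build_key, probe_key):
        table = {}
        for row in build_rows:
            # Build phase: group rows of the build relation by join key.
            table.setdefault(row[build_key], []).append(row)
        out = []
        for row in probe_rows:
            # Probe phase: emit one merged row per matching build row.
            for match in table.get(row[probe_key], []):
                out.append({**match, **row})
        return out

    left = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
    right = [{"id": 2, "price": 10}, {"id": 3, "price": 20}]
    joined = hash_join(left, right, "id", "id")
    ```

    An HLS implementation of the same kernel would additionally pipeline the probe loop and partition the hash table across on-chip memories, which is where the frameworks' trade-offs appear.
    
    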

    Analytical Challenges in Modern Tax Administration: A Brief History of Analytics at the IRS


    NIMASTEP: a software to modelize, study and analyze the dynamics of various small objects orbiting specific bodies

    NIMASTEP is a dedicated numerical software package, developed by us, which allows one to integrate the osculating motion (using Cartesian coordinates) in a Newtonian approach of an object treated as a point mass orbiting a homogeneous central body that rotates at a constant rate around its axis of smallest inertia. The code can be applied to objects such as particles, artificial or natural satellites, or space debris. The central body can be any terrestrial planet of the solar system, any dwarf planet, or even an asteroid. In addition, a wide range of perturbations can be taken into account, such as the combined third-body attraction of the Sun, the Moon, or the planets, the direct solar radiation pressure (with the central body's shadow), the non-homogeneous gravitational field caused by the non-sphericity of the central body, and even some thrust forces. The simulations were performed using different integration algorithms. Two additional tools are integrated in the software package: the chaos indicator MEGNO and the frequency analysis NAFF. NIMASTEP is designed in a flexible, modular style that allows one to (de)select many options without compromising performance, and makes it easy to add further capabilities. The code has been validated through several tests, such as comparisons with numerical integrations made with other software packages or with semi-analytical and analytical studies. The various possibilities of NIMASTEP are described and explained, and some tests of astrophysical interest are presented. At present the code is proprietary, but it will be released for use by the community in the near future. Information for contacting its authors and (in the near future) for obtaining the software is available on the web site http://www.fundp.ac.be/en/research/projects/page_view/10278201/
    Comment: Astronomy & Astrophysics - Received: 25 November 2011 / Accepted: 27 February 2012 -- 14 pages, 4 figures
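    The core computation the abstract describes, Newtonian point-mass motion integrated in Cartesian coordinates, can be sketched minimally. This is not NIMASTEP's code; it is an assumed illustration using a fixed-step leapfrog scheme with the gravitational parameter normalised to mu = 1:

    ```python
    import math

    # Newtonian point-mass acceleration a = -mu * r / |r|^3 in
    # Cartesian coordinates (mu = GM of the central body).
    def acceleration(r, mu=1.0):
        d = math.sqrt(r[0] ** 2 + r[1] ** 2 + r[2] ** 2)
        f = -mu / d ** 3
        return [f * c for c in r]

    # Kick-drift-kick leapfrog: a symplectic integrator, so the
    # energy error stays bounded over long orbital integrations.
    def leapfrog(r, v, dt, steps, mu=1.0):
        a = acceleration(r, mu)
        for _ in range(steps):
            v = [vi + 0.5 * dt * ai for vi, ai in zip(v, a)]
            r = [ri + dt * vi for ri, vi in zip(r, v)]
            a = acceleration(r, mu)
            v = [vi + 0.5 * dt * ai for vi, ai in zip(v, a)]
        return r, v

    # Circular orbit of radius 1 (v = sqrt(mu / r) = 1), followed
    # for about one period (t = 2*pi with dt = 0.001).
    r, v = leapfrog([1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                    dt=0.001, steps=6283, mu=1.0)
    radius = math.sqrt(sum(c * c for c in r))
    ```

    The perturbations listed in the abstract (third-body attraction, radiation pressure, non-spherical gravity, thrust) would each add a term inside `acceleration`.
    
    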

    Fundamental principles in drawing inference from sequence analysis

    Individual life courses are dynamic and can be represented as a sequence of states for some portion of their experience. More generally, such sequences have been studied in many fields across the social sciences, for example sociology, linguistics, and psychology, where the conceptualisation of subjects progressing through a sequence of states is common. However, many models and datasets allow only for the treatment of aggregates or transitions, rather than the interpretation of whole sequences. The temporal aspect of the analysis is fundamental to any inference about the evolution of the subjects, but assumptions about time are not normally made explicit. Moreover, without a clear idea of what sequences look like, it is impossible to determine, when something is not seen, whether it was not actually there. Some principles are proposed which link the ideas of sequences, hypotheses, analytical frameworks, categorisation, and representation, each underpinned by the consideration of time. To make inferences about sequences, one needs to understand what the sequences represent; the hypotheses and assumptions that can be derived about them; the categories within the sequences; and the data representation at each stage. These ideas are obvious in themselves, but they are interlinked, imposing restrictions on each other and on the inferences which can be drawn.
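    The distinction drawn above, treating aggregates or transitions versus interpreting whole sequences, can be made concrete with a toy example. The sequences and state coding here are invented for illustration (E = employed, U = unemployed at successive time points):

    ```python
    from collections import Counter

    # Three hypothetical life-course sequences over seven time points.
    sequences = [
        "EEEUUEE",
        "EEUUUEE",
        "EEEEEEE",
    ]

    # Transition view: pool all adjacent state pairs across subjects,
    # which discards the ordering of each individual trajectory.
    transitions = Counter(
        (a, b) for s in sequences for a, b in zip(s, s[1:])
    )

    # Whole-sequence view: a pairwise dissimilarity (here a simple
    # Hamming distance between equal-length sequences) keeps each
    # trajectory intact for comparison.
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    d01 = hamming(sequences[0], sequences[1])
    ```

    The transition counts cannot distinguish an early spell of unemployment from a late one, whereas the whole-sequence distance can; this is the kind of information loss the principles above are meant to make explicit.
    
    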

    Assembling the Tree of Life in Europe (AToLE)

    A network of scientists under the umbrella of 'Assembling the Tree of Life in Europe (AToLE)' seeks funding under the FP7-Theme: Cooperation - Environment (including Climate Change and Biodiversity Conservation) programme of the European Commission.

    Review of analytical instruments for EEG analysis

    Since it was first used in 1926, EEG has been one of the most useful instruments of neuroscience. In order to start using EEG data, we need not only the EEG apparatus, but also analytical tools and the skills to understand what our data mean. This article describes several classical analytical tools as well as newer ones that appeared only a few years ago. We hope it will be useful for those researchers who have only started working in the field of cognitive EEG.
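    One of the classical analytical tools such a review typically covers is spectral analysis: decomposing the EEG signal into frequency bands (e.g. alpha, roughly 8-13 Hz). The following is an assumed illustration, not an example from the article, recovering a synthetic 10 Hz "alpha" oscillation with a plain DFT:

    ```python
    import math

    fs = 100   # sampling rate in Hz
    n = 200    # two seconds of signal
    # Synthetic EEG-like signal: a pure 10 Hz sine.
    signal = [math.sin(2 * math.pi * 10 * t / fs) for t in range(n)]

    def dft_power(x):
        """Power at each DFT bin k (frequency k * fs / n), up to Nyquist."""
        n = len(x)
        power = []
        for k in range(n // 2 + 1):
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power.append(re * re + im * im)
        return power

    power = dft_power(signal)
    # Frequency of the strongest bin, in Hz.
    peak_hz = max(range(len(power)), key=power.__getitem__) * fs / n
    ```

    Real EEG analysis would use windowed, averaged estimators (and a library FFT) rather than this direct DFT, but the band-power idea is the same.
    
    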