
    In search of the right literature search engine(s)

    *Background*
Collecting scientific publications related to a specific topic is crucial for different phases of research, health care and effective text mining. Available bio-literature search engines vary in their ability to scan different sections of articles for user-provided search terms and/or phrases. Since a thorough scientific analysis of all major bibliographic tools has not been done, their selection has often remained subjective. We considered most of the existing bio-literature search engines (http://www.shodhaka.com/startbioinfo/LitSearch.html) and performed an extensive analysis of 18 literature search engines over a period of about 3 years. Eight different topics were taken and about 50 searches were performed using the selected search engines. The relevance of the retrieved citations was carefully assessed after every search to estimate citation retrieval efficiency. Several other features of the search tools were also compared using a semi-quantitative method.
*Results*
The study provides the first tangible comparative account of the relative retrieval efficiency, input and output features, resource coverage and a few other utilities of the bio-literature search tools. The results show that using a single search tool can lead to a loss of up to 75% of relevant citations in some cases. Hence, the use of multiple search tools is recommended. However, it would not be practical to use all, or too many, search engines. The detailed observations made in the study can assist researchers and health professionals in making a more objective selection among the search engines. A corollary study revealed the relative advantages and disadvantages of the full-text scanning tools.
*Conclusion*
While many studies have attempted to compare literature search engines, important questions have remained unanswered to date. Following are some of those questions, along with the answers provided by the current study:
a)	Which tools should be used to get the maximum number of relevant citations with a reasonable effort? ANSWER: _Using PubMed, Scopus, Google Scholar and HighWire Press individually, and then compiling the hits into a union list, is the best option. Citation-Compiler (http://www.shodhaka.com/compiler) can help to compile the results from each of the recommended tools._
b)	What is the approximate percentage of relevant citations expected to be lost if only one search engine is used? ANSWER: _About 39% of the total relevant citations were lost in searches across 4 topics; 49% of hits were lost when using PubMed or HighWire Press, while losses of 37% and 20% were noticed when using Google Scholar and Scopus, respectively._
c)	Which full text search engines can be recommended in general? ANSWER: _HighWire Press and Google Scholar._
d)	Among the most commonly used search engines, which one can be recommended for the best precision? ANSWER: _EBIMed._
e)	Among the most commonly used search engines, which one can be recommended for the best recall? ANSWER: _Depending on the type of query used, the best recall could be obtained by HighWire Press or Scopus._
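The recommendation above — search each engine separately, then merge the hits into a union list — can be sketched as follows. This is an illustrative de-duplication routine, not Citation-Compiler's actual algorithm; the field names (`doi`, `title`) and the fallback of matching on a normalised title are assumptions.

```python
# Hypothetical sketch of compiling a union list of citations retrieved from
# several search engines (e.g. PubMed, Scopus, Google Scholar, HighWire Press).
# De-duplicates by DOI when available, otherwise by normalised title.

def compile_union(*result_lists):
    """Merge citation lists from multiple engines into one union list."""
    seen = set()
    union = []
    for results in result_lists:
        for citation in results:
            key = citation.get("doi") or citation["title"].casefold().strip()
            if key not in seen:
                seen.add(key)
                union.append(citation)
    return union

pubmed = [{"title": "Gene X in cancer", "doi": "10.1/abc"}]
scopus = [{"title": "Gene X in Cancer", "doi": "10.1/abc"},
          {"title": "Gene Y review", "doi": None}]
print(len(compile_union(pubmed, scopus)))  # 2 unique citations
```

Matching on DOI first avoids the capitalisation and punctuation differences that make title-only matching unreliable across engines.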

    SNAP: Stateful Network-Wide Abstractions for Packet Processing

    Early programming languages for software-defined networking (SDN) were built on top of the simple match-action paradigm offered by OpenFlow 1.0. However, emerging hardware and software switches offer much more sophisticated support for persistent state in the data plane, without involving a central controller. Nevertheless, managing stateful, distributed systems efficiently and correctly is known to be one of the most challenging programming problems. To simplify this new SDN problem, we introduce SNAP. SNAP offers a simpler "centralized" stateful programming model, by allowing programmers to develop programs on top of one big switch rather than many. These programs may contain reads and writes to global, persistent arrays, and as a result, programmers can implement a broad range of applications, from stateful firewalls to fine-grained traffic monitoring. The SNAP compiler relieves programmers of having to worry about how to distribute, place, and optimize access to these stateful arrays by doing it all for them. More specifically, the compiler discovers read/write dependencies between arrays and translates one-big-switch programs into an efficient internal representation based on a novel variant of binary decision diagrams. This internal representation is used to construct a mixed-integer linear program, which jointly optimizes the placement of state and the routing of traffic across the underlying physical topology. We have implemented a prototype compiler and applied it to about 20 SNAP programs over various topologies to demonstrate our techniques' scalability.
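The "one big switch" model with global persistent arrays can be illustrated with a stateful firewall, the paper's own example application. The sketch below is a plain-Python simulation of the idea, not SNAP's actual syntax; the packet fields and dictionary-as-array representation are assumptions for illustration.

```python
# Illustrative simulation (not SNAP syntax) of a stateful firewall written
# against "one big switch": a global, persistent array records which
# internal->external connections have been initiated, and inbound packets
# are admitted only if the external host was previously contacted.

contacted = {}  # global persistent array, indexed by (internal, external)

def process(pkt):
    src, dst, direction = pkt["src"], pkt["dst"], pkt["dir"]
    if direction == "out":
        contacted[(src, dst)] = True   # write to persistent state
        return "forward"
    # inbound: read state written by earlier packets
    return "forward" if contacted.get((dst, src)) else "drop"

print(process({"src": "h1", "dst": "ext", "dir": "out"}))   # forward
print(process({"src": "ext", "dst": "h1", "dir": "in"}))    # forward
print(process({"src": "ext2", "dst": "h2", "dir": "in"}))   # drop
```

In SNAP, the programmer writes only this centralized logic; the compiler decides where `contacted` physically lives and routes traffic through it.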

    PONDER - A Real time software backend for pulsar and IPS observations at the Ooty Radio Telescope

    This paper describes a new real-time versatile backend, the Pulsar Ooty Radio Telescope New Digital Efficient Receiver (PONDER), which has been designed to operate along with the legacy analog system of the Ooty Radio Telescope (ORT). PONDER makes use of current state-of-the-art computing hardware, a Graphical Processing Unit (GPU) and sufficiently large disk storage to support high-time-resolution real-time data of pulsar observations, obtained by coherent dedispersion over a bandpass of 16 MHz. Four different modes for pulsar observations are implemented in PONDER to provide standard reduced data products, such as time-stamped integrated profiles and dedispersed time series, allowing faster avenues to scientific results for a variety of pulsar studies. Additionally, PONDER also supports general modes of interplanetary scintillation (IPS) measurements and very long baseline interferometry data recording. The IPS mode yields a single-polarisation correlated time series of solar wind scintillation over a bandwidth (16 MHz) about four times larger than that of the legacy system, as well as its fluctuation spectrum with high temporal and frequency resolutions. The key point is that all the above modes operate in real time. This paper presents the design aspects of PONDER and outlines the design methodology for future similar backends. It also explains the principal operations of PONDER, illustrates its capabilities for a variety of pulsar and IPS observations and demonstrates its usefulness for a variety of astrophysical studies using the high sensitivity of the ORT.
    Comment: 25 pages, 14 figures, accepted by Experimental Astronomy
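To see why dedispersion matters for a 16 MHz band, the standard cold-plasma dispersion formula gives the differential delay a backend like PONDER must remove. The formula and its constant are standard pulsar-astronomy results; the band edges below (a 16 MHz bandpass near the ORT's ~326.5 MHz observing frequency) and the DM value are assumed for illustration.

```python
# Sketch of the dispersive delay that coherent dedispersion corrects.
# Delay of a signal at frequency f (MHz) scales as K_DM * DM / f^2.

K_DM = 4.149e3  # dispersion constant, MHz^2 pc^-1 cm^3 s

def dispersion_delay(dm, f_lo_mhz, f_hi_mhz):
    """Arrival-time delay (s) of the low band edge relative to the
    high band edge, for dispersion measure dm in pc cm^-3."""
    return K_DM * dm * (f_lo_mhz**-2 - f_hi_mhz**-2)

# 16 MHz bandpass spanning 318.5-334.5 MHz, DM = 10 pc cm^-3 (assumed)
delay = dispersion_delay(10.0, 318.5, 334.5)
print(f"{delay * 1e3:.1f} ms across the band")  # ~38 ms
```

A pulse therefore smears by tens of milliseconds across the band at these frequencies, which is why the correction must be applied (coherently, in PONDER's case) before profiles or time series are formed.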

    TTC: A Tensor Transposition Compiler for Multiple Architectures

    We consider the problem of transposing tensors of arbitrary dimension and describe TTC, an open-source domain-specific parallel compiler. TTC generates optimized parallel C++/CUDA C code that achieves a significant fraction of the system's peak memory bandwidth. TTC exhibits high performance across multiple architectures, including modern AVX-based systems (e.g., Intel Haswell, AMD Steamroller), Intel's Knights Corner, as well as different CUDA-based GPUs such as NVIDIA's Kepler and Maxwell architectures. We report speedups of TTC over a meaningful baseline implementation generated by external C++ compilers; the results suggest that a domain-specific compiler can outperform its general-purpose counterpart significantly: for instance, compared with Intel's latest C++ compiler on the Haswell and Knights Corner architectures, TTC yields speedups of up to 8× and 32×, respectively. We also showcase TTC's support for multiple leading dimensions, making it a suitable candidate for the generation of performance-critical packing functions that are at the core of the ubiquitous BLAS 3 routines.
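The operation TTC compiles, B = permute(A, perm) over a dense tensor, can be pinned down with a naive reference implementation. This sketch (in Python, over flat row-major arrays with explicit strides) only fixes the semantics; TTC's generated C++/CUDA kernels compute the same index mapping but blocked and vectorised for memory bandwidth.

```python
# Naive reference for an arbitrary-dimensional tensor transposition:
# given flat row-major data `a` with dimensions `shape`, reorder the
# dimensions according to `perm`.
from itertools import product

def transpose(a, shape, perm):
    """Return the flat row-major array of `a` with dims permuted by `perm`."""
    out_shape = [shape[p] for p in perm]
    # row-major strides of the input tensor
    strides = [1] * len(shape)
    for i in range(len(shape) - 2, -1, -1):
        strides[i] = strides[i + 1] * shape[i + 1]
    out = []
    for idx in product(*(range(s) for s in out_shape)):
        # map each output index back to its source element
        src = sum(idx[d] * strides[perm[d]] for d in range(len(perm)))
        out.append(a[src])
    return out

a = list(range(6))                   # 2x3 tensor [[0,1,2],[3,4,5]]
print(transpose(a, [2, 3], [1, 0]))  # -> [0, 3, 1, 4, 2, 5]
```

The scattered strided reads in the inner loop are exactly what makes transposition memory-bound, and why blocking for cache lines (as a domain-specific compiler can do per permutation) pays off.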

    Supporting task creation inside FPGA devices

    The most common model for using co-processors/accelerators is the master-slave model, where the slaves (co-processors/accelerators) are driven by a general-purpose CPU. This simplifies the management of the accelerators because they cannot actively interact with the runtime; they are just passive slaves that operate over memory on demand. However, the master-slave model limits system possibilities and introduces synchronization overheads that could be avoided. To overcome those limitations and increase the possibilities of accelerators, we propose extending task-based programming models (like OpenMP [1] or OmpSs) to support some runtime APIs inside the FPGA co-processor. As a proof of concept, we implemented our proposal over the OmpSs@FPGA environment [2], adding the needed infrastructure in the FPGA bitstream and modifying the existing tools to support the creation of child tasks inside a task offloaded to an FPGA accelerator. In addition, we added support for synchronizing the child tasks created by an FPGA task, regardless of whether they are executed in an SMP host thread or also target another FPGA accelerator in the same co-processor.