
    Lifecycle information for e-literature: full report from the LIFE project

    This report is a record of the LIFE Project. The Project ran for one year, and its aim was to deliver crucial information about the cost and management of digital material; this information should in turn be applicable to any institution with an interest in preserving and providing access to electronic collections. The Project is a joint venture between The British Library and UCL Library Services, funded by JISC under programme area (i) as listed in paragraph 16 of the JISC 4/04 circular (Institutional Management Support and Collaboration); as such it has set requirements and outcomes which must be met, and the Project has done its best to meet them. Where the Project has been unable to answer specific questions, strong recommendations have been made for future project work to do so. The outcomes of this Project are expected to be a practical set of guidelines and a framework within which costs can be applied to digital collections, in order to answer the following questions:
    • What is the long-term cost of preserving digital material?
    • Who is going to do it?
    • What are the long-term costs for a library in HE/FE to partner with another institution to carry out long-term archiving?
    • What are the comparative long-term costs of a paper and a digital copy of the same publication?
    • At what point will there be sufficient confidence in the stability and maturity of digital preservation to switch from paper for publications available in parallel formats?
    • What are the relative risks of digital versus paper archiving?
    The Project has attempted to answer these questions by using a developing lifecycle methodology and three diverse collections of digital content. The LIFE Project team chose UCL e-journals, BL Web Archiving and the BL VDEP digital collections to provide a strong challenge to the methodology, as well as to help reach the key Project aim of attributing long-term cost to digital collections. The results from the Case Studies and the Project findings are both surprising and illuminating.

    Identifying Novel Leads Using Combinatorial Libraries: Issues and Successes

    Chemically generated libraries of small, non-oligomeric compounds are being widely embraced by researchers in both industry and academia. There has been a steady development of new chemistries and equipment applied to library generation, so it is now possible to synthesize almost any desired class of compound. However, there are still important issues to consider, ranging from what specific types of compounds should be made to concerns such as sample resynthesis, structural confirmation of the identified hit, and how best to integrate this technology into a pharmaceutical drug discovery operation. This paper illustrates our approach to new lead discovery (individual, diverse, drug-like molecules of known structural identity, prepared using a simple, spatially addressable parallel synthesis approach to produce Multiple Diverse as well as Universal Libraries) and describes some representative examples of the chemistries we have developed within these approaches (the preparation of bis-benzamide phenols, thiophenes, pyrrolidines, and highly substituted biphenyls). Finally, the manuscript concludes by addressing some of the present concerns that must still be considered in this field.

    Automatic Code-Generation Techniques for Micro-Threaded RISC Architectures

    Submitted to the University of Hertfordshire in partial fulfillment of the requirements of the degree of Master of Science by research. There has been an ever-widening gap between processor and memory speeds, resulting in a 'memory wall' where the time for memory accesses dominates performance. To counter this, architectures that use many very small threads, allowing multiple memory accesses to occur in parallel, have been under investigation. Examples of these architectures are the CARE (Compiler Aided Reorder Engine) architecture, micro-threading architectures, and cellular architectures such as the IBM Cyclops family, implemented using processors-in-memory (PIM), which is the main architecture discussed in this thesis. PIM architectures achieve high performance by increasing the bandwidth of processor-to-memory communication and reducing its latency, via the use of many processors physically close to the main memory. These massively parallel architectures may have sophisticated memory models, and I contend that it remains an open question, from the programmer's perspective, what the ideal approach to implementing parallelism with many threads may be. Should the implementation be at language level, such as UPC, HPF or other language extensions; within the compiler, using trace scheduling; at library level, for example OpenMP or POSIX threads; or within the architecture itself, as in designs derived from data-flow architectures? In this thesis, DIMES (the Delaware Iterative Multiprocessor Emulation System), which is being developed by CAPSL at the University of Delaware, was used as a hardware evaluation tool for such cellular architectures. As the programming example, the author chose a threaded Mandelbrot-set generator with a work-stealing algorithm to evaluate the DIMES cthread programming model; this implementation was used to identify potential problems and issues that may occur when attempting to implement a massive number of very short-lived threads.
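
    The evaluation described above combines two standard ideas: a Mandelbrot renderer (embarrassingly parallel but badly load-imbalanced, since pixels inside the set cost far more than pixels outside it) and dynamic scheduling of small units of work. As a rough illustration, here is a minimal sketch in portable C11 with POSIX threads; it is not the thesis's DIMES cthread code, and it replaces per-thread work-stealing deques with a single shared atomic row counter, which captures the same load-balancing goal with less machinery. All names and parameters are illustrative.

        /* Hypothetical sketch, not the thesis's cthread implementation:
         * worker threads repeatedly claim the next unrendered row from a
         * shared atomic counter, so cheap and expensive rows balance out. */
        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdio.h>

        #define WIDTH    256
        #define HEIGHT   256
        #define MAX_ITER 1000
        #define NTHREADS 4

        static atomic_int next_row;              /* next unclaimed row index */
        static int image[HEIGHT][WIDTH];         /* per-pixel iteration counts */

        /* Iterate z = z*z + c until divergence or MAX_ITER. */
        static int mandel(double cr, double ci)
        {
            double zr = 0.0, zi = 0.0;
            int n = 0;
            while (zr * zr + zi * zi <= 4.0 && n < MAX_ITER) {
                double t = zr * zr - zi * zi + cr;
                zi = 2.0 * zr * zi + ci;
                zr = t;
                n++;
            }
            return n;
        }

        /* Rows crossing the set body cost ~MAX_ITER iterations per pixel
         * while outer rows diverge quickly, so a static partition would
         * balance poorly; claiming rows dynamically evens the load. */
        static void *worker(void *arg)
        {
            (void)arg;
            int y;
            while ((y = atomic_fetch_add(&next_row, 1)) < HEIGHT) {
                for (int x = 0; x < WIDTH; x++) {
                    double cr = -2.0 + 3.0 * x / WIDTH;   /* real axis: [-2, 1) */
                    double ci = -1.5 + 3.0 * y / HEIGHT;  /* imag axis: [-1.5, 1.5) */
                    image[y][x] = mandel(cr, ci);
                }
            }
            return NULL;
        }

        int main(void)
        {
            pthread_t tid[NTHREADS];
            for (int i = 0; i < NTHREADS; i++)
                pthread_create(&tid[i], NULL, worker, NULL);
            for (int i = 0; i < NTHREADS; i++)
                pthread_join(tid[i], NULL);
            printf("center pixel iterations: %d\n", image[HEIGHT / 2][WIDTH / 2]);
            return 0;
        }

    True work stealing would give each thread its own deque of work items and let idle threads steal from the tails of busy ones; with threads as short-lived as those the thesis targets, the relative overhead of that machinery is exactly the kind of issue the DIMES evaluation is meant to expose.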

    State-of-the-Art in Parallel Computing with R

    R is a mature open-source programming language for statistical computing and graphics. Many areas of statistical research are experiencing rapid growth in the size of data sets, and methodological advances drive increased use of simulations; a common approach to meeting these demands is parallel computing. This paper presents an overview of techniques for parallel computing with R on computer clusters, on multi-core systems, and in grid computing. It reviews sixteen different packages, comparing them on their state of development, the parallel technology used, usability, acceptance, and performance. Two packages (snow, Rmpi) stand out as particularly useful for general use on computer clusters. Packages for grid computing are still in development, with only one package currently available to the end user. For multi-core systems, four different packages exist, but a number of issues pose challenges to early adopters. The paper concludes with ideas for further developments in high-performance computing with R. Example code is available in the appendix.

    Components and Interfaces of a Process Management System for Parallel Programs

    Parallel jobs are different from sequential jobs and require a different type of process management. We present here a process management system for parallel programs such as those written using MPI. A primary goal of the system, which we call MPD (for multipurpose daemon), is to be scalable. By this we mean that startup of interactive parallel jobs comprising thousands of processes is quick, that signals can be quickly delivered to processes, and that stdin, stdout, and stderr are managed intuitively. Our primary target is parallel machines made up of clusters of SMPs, but the system is also useful in more tightly integrated environments. We describe how MPD enables much faster startup and better runtime management of parallel jobs. We show how close control of stdio can support the easy implementation of a number of convenient system utilities, even a parallel debugger. We describe a simple but general interface that can be used to separate any process manager from a parallel library, which we use to keep MPD separate from MPICH.
    Comment: 12 pages, Workshop on Clusters and Computational Grids for Scientific Computing, Sept. 24-27, 2000, Le Chateau de Faverges de la Tour, France
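
    To make concrete what such a process manager handles, here is a minimal MPI program in C; this is an illustrative sketch, not code from the paper. Every process in the job writes to stdout, and the process manager must start all the processes quickly, deliver signals to them, and collect, label, and forward their stdio streams.

        /* Minimal MPI job (illustrative sketch). Started under a process
         * manager -- e.g. "mpiexec -n 1000 ./hello" -- each of the N
         * processes produces stdout that the manager must collect and
         * forward; stdin is conventionally routed to rank 0. */
        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int rank, size;
            MPI_Init(&argc, &argv);                /* rendezvous with the runtime/manager */
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id within the job */
            MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total processes the manager started */

            printf("hello from rank %d of %d\n", rank, size);

            MPI_Finalize();
            return 0;
        }

    Starting a thousand such processes quickly, and delivering their thousand interleaved stdout streams intuitively, are precisely the scalability goals the MPD design targets.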

    The Personal Information Trainer

    [Excerpt] The Personal Information Trainer (PIT) can become a unique employee benefit, written into the employment contracts of the very few key individuals deemed essential to the success of a firm or institution. This is a no-extra-cost (non-compensatory) benefit that can help improve the recruitment and retention of top talent and enhance the library’s value proposition. The concept is useful to human resource managers, libraries, and the institutions they serve. This article provides the fundamental concepts and constructs necessary to implement such a program, with an emphasis on why and how this should be done.