
    Encore EDS Task Force Final Summary Report (6.4.14)

    Final report of the Encore EDS Task Force. After careful consideration, trial, and evaluation, the task force reached consensus that HELIN should move forward in switching from Serials Solutions to EBSCO's A-Z with LinkSource product for 2014-2015. This will allow each HELIN library to decide for itself whether to start using Encore Duet immediately or to hold off until upcoming enhancements become available. In order to be ready for the beginning of the 2014 fall semester, the task force recommends that the transition to EBSCO A-Z with LinkSource be made as soon as possible, and that the HELIN Board and Central Office communicate a timeline for implementation of EBSCO A-Z with LinkSource and Encore Duet to the HELIN libraries.

    Integrating digital document acquisition into a university library : A case study of social and organizational challenges

    In this article we report on the effort of the university library of the Vienna University of Economics and Business Administration to integrate a digital library component for research documents authored at the university into the existing library infrastructure. Setting up a digital library has become a relatively easy task using current database technology and the components and tools freely available. However, integrating such a digital library into existing library systems and adapting existing document acquisition workflows in the organization are non-trivial tasks. We use a research framework to identify the key players in this change process and to analyze their incentive structures. We then describe the lightweight integration approach employed by our university and show how it provides incentives to the key players while requiring only minimal adaptation of the organization in terms of changing existing workflows. Our experience suggests that this lightweight integration offers a cost-efficient and low-risk intermediate step towards switching to exclusively digital document acquisition.

    Situational Enterprise Services

    The ability to rapidly find potential business partners and rapidly set up a collaborative business process is desirable in the face of market turbulence. Collaborative business processes are increasingly dependent on the integration of business information systems, and traditional linking of business processes has a largely ad hoc character. Implementing situational enterprise services in an appropriate way will give the business more flexibility, adaptability, and agility. Service-oriented architecture (SOA) is rapidly becoming the dominant computing paradigm and is now being embraced by organizations everywhere as the key to business agility. Web 2.0 technologies such as AJAX, on the other hand, provide rich user interaction for successful service discovery, selection, adaptation, invocation, and service construction. They also balance automatic integration of services with human interaction, disconnecting content from presentation in the delivery of the service. A further Web technology, the Semantic Web, makes automatic service discovery, mediation, and composition possible. Integrating SOA, Web 2.0 technologies, and the Semantic Web into a service-oriented virtual enterprise connects business processes in a much more horizontal fashion. To run these services consistently across the enterprise, an enterprise infrastructure that provides an enterprise architecture and security foundation is necessary. The world is constantly changing, and so is the business environment: an agile enterprise needs to be able to change quickly and cost-effectively how it does business and with whom it does business. Recognizing and adapting to different situations is an important aspect of today's business environment. Changes in an operating environment can happen implicitly or explicitly; they can be caused by different factors in the application domain, can happen in order to organize information in a better way, or can be made according to users' needs, such as incorporating additional functionality. Handling and managing the different situations a service-oriented enterprise encounters is therefore an important concern. In this chapter, we investigate how to apply new Web technologies to develop, deploy, and execute enterprise services.
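
    The central mechanism in this abstract is late binding: a client asks a registry for a capability at run time instead of hard-wiring a partner's endpoint. The Python sketch below illustrates that pattern only; ServiceRegistry, register, and discover are illustrative names, not part of any real SOA or Semantic Web toolkit.

        from typing import Callable, Dict, List

        class ServiceRegistry:
            """In-memory stand-in for a service-discovery component
            (a UDDI or semantic registry in a full SOA stack)."""

            def __init__(self) -> None:
                self._services: Dict[str, List[Callable[..., object]]] = {}

            def register(self, capability: str, endpoint: Callable[..., object]) -> None:
                # A real registry would also store a semantic description
                # of the service so that matching can be automated.
                self._services.setdefault(capability, []).append(endpoint)

            def discover(self, capability: str) -> Callable[..., object]:
                candidates = self._services.get(capability)
                if not candidates:
                    raise LookupError(f"no service offers {capability!r}")
                return candidates[0]  # a real registry would rank candidates

        registry = ServiceRegistry()
        registry.register("currency.convert",
                          lambda amount, rate: round(amount * rate, 2))

        # The client binds to a capability, not to a concrete partner,
        # so the partner can be swapped as the business situation changes.
        convert = registry.discover("currency.convert")
        print(convert(100.0, 0.92))  # -> 92.0

    Because clients name capabilities rather than endpoints, swapping a business partner requires only a new registration, which is the flexibility the abstract argues for.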

    Next-Generation EU DataGrid Data Management Services

    We describe the architecture and initial implementation of the next generation of Grid Data Management Middleware in the EU DataGrid (EDG) project. The new architecture stems from our experience and the user requirements gathered during two years of running our initial set of Grid Data Management Services. All of our new services are based on the Web Service technology paradigm, very much in line with the emerging Open Grid Services Architecture (OGSA). We have modularized our components and invested a great deal of effort in building a secure, extensible, and robust service, starting from the design but also using a streamlined build and testing framework. Our service components are: Replica Location Service, Replica Metadata Service, Replica Optimization Service, Replica Subscription, and high-level replica management. The service security infrastructure is fully GSI-enabled, hence compatible with the existing Globus Toolkit 2-based services; moreover, it allows for fine-grained authorization mechanisms that can be adjusted depending on the service semantics.
    Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics conference (CHEP03), La Jolla, CA, USA, March 2003; 8 pages, LaTeX; the file contains all LaTeX sources, and figures are in the directory "figures".
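
    At the heart of the services listed above is the catalog a Replica Location Service maintains: a mapping from logical file names (LFNs) to the physical replicas (PFNs) held at different storage sites. The Python sketch below shows that data structure in miniature; it is an illustration of the concept, with invented file names and sites, not the actual EDG interface.

        from collections import defaultdict

        class ReplicaLocationService:
            """Toy catalog mapping each logical file name (LFN) to the
            set of physical file names (PFNs) where replicas live."""

            def __init__(self) -> None:
                self._catalog = defaultdict(set)  # LFN -> {PFN, ...}

            def add_replica(self, lfn: str, pfn: str) -> None:
                self._catalog[lfn].add(pfn)

            def list_replicas(self, lfn: str) -> set:
                # Return a copy so callers cannot mutate the catalog.
                return set(self._catalog.get(lfn, set()))

        rls = ReplicaLocationService()
        rls.add_replica("lfn:/grid/dteam/run42.dat", "srm://se.cern.ch/data/run42.dat")
        rls.add_replica("lfn:/grid/dteam/run42.dat", "srm://se.nikhef.nl/data/run42.dat")
        print(rls.list_replicas("lfn:/grid/dteam/run42.dat"))

    A Replica Optimization Service would then choose among the returned PFNs (for example, by network cost), which is why the location catalog and the selection logic are separate components in the architecture.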

    An open reply to "What is going on at the Library of Congress?" by Thomas Mann

    This is an open response to a report by Thomas Mann at the Library of Congress concerning changes in cataloging. The author contends that, although the current changes at the Library of Congress are suspect, change is nonetheless imminent, and experienced catalogers must offer positive suggestions for change or they will be ignored by management.

    The Dark Energy Survey Data Management System

    The Dark Energy Survey collaboration will study cosmic acceleration with a 5000 deg² grizY survey in the southern sky over 525 nights from 2011 to 2016. The DES data management (DESDM) system will be used to process and archive these data and the resulting science-ready data products. The DESDM system consists of an integrated archive, a processing framework, an ensemble of astronomy codes, and a data access framework. We are developing the DESDM system for operation in the high performance computing (HPC) environments at NCSA and Fermilab. Operating the DESDM system in an HPC environment offers both speed and flexibility. We will employ it for our regular nightly processing needs, and for more compute-intensive tasks such as large-scale image coaddition campaigns, extraction of weak lensing shear from the full survey dataset, and massive seasonal reprocessing of the DES data. Data products will be available to the Collaboration and later to the public through a Virtual Observatory-compatible web portal. Our approach leverages investments in publicly available HPC systems, greatly reducing hardware and maintenance costs to the project, which must deploy and maintain only the storage, database platforms, and orchestration and web portal nodes that are specific to DESDM. In Fall 2007, we tested the current DESDM system on both simulated and real survey data. We used TeraGrid to process 10 simulated DES nights (3 TB of raw data), ingesting and calibrating approximately 250 million objects into the DES Archive database. We also used DESDM to process and calibrate over 50 nights of survey data acquired with the Mosaic2 camera. Comparison to truth tables in the case of the simulated data, and internal crosschecks in the case of the real data, indicate that the astrometric and photometric data quality is excellent.
    Comment: To be published in the proceedings of the SPIE conference on Astronomical Instrumentation (held in Marseille in June 2008). This preprint is made available with the permission of SPIE. Further information, together with a preprint containing full-quality images, is available at http://desweb.cosmology.uiuc.edu/wik
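
    The nightly processing described above is, structurally, a fixed sequence of astronomy codes applied to each exposure, followed by ingestion into the archive. The Python sketch below shows that orchestration shape only; the stage names (detrend, astrometry, catalog) and their bodies are placeholders, not the actual DESDM codes.

        from typing import Callable, Dict, List

        Exposure = Dict[str, object]
        Stage = Callable[[Exposure], Exposure]

        def detrend(exp: Exposure) -> Exposure:
            exp["detrended"] = True      # bias/flat-field correction would go here
            return exp

        def astrometry(exp: Exposure) -> Exposure:
            exp["wcs"] = "fitted"        # fit a world-coordinate solution
            return exp

        def catalog(exp: Exposure) -> Exposure:
            exp["n_objects"] = 1_000     # source extraction and cataloging
            return exp

        PIPELINE: List[Stage] = [detrend, astrometry, catalog]

        def run_stages(exp: Exposure) -> Exposure:
            for stage in PIPELINE:
                exp = stage(exp)
            return exp

        def process_night(exposures: List[Exposure]) -> List[Exposure]:
            # In an HPC deployment this dispatch would fan out across
            # compute nodes; here it runs serially in-process.
            return [run_stages(e) for e in exposures]

        night = process_night([{"id": f"exp{i}"} for i in range(3)])
        print(len(night), night[0])

    Keeping the stage list declarative is one way to make campaigns such as seasonal reprocessing cheap to express: the same stages are simply re-run over a larger set of exposures.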

    SHEER “smart” database: technical note

    The SHEER database brings together a large amount of data of various types: interdisciplinary site data from seven independent episodes, research data, and data from the project results dissemination process. This concerns mainly shale gas exploitation test sites, processing procedures, results of data interpretation, and recommendations. The smart SHEER database harmonizes data from different fields (geophysical, geochemical, geological, technological, etc.) and creates and provides access to an advanced database of case studies of environmental impact indicators associated with shale gas exploitation and exploration, which did not previously exist. A unique component of the SHEER database comes from the monitoring activity performed during the project at one active shale gas exploration and exploitation site at Wysin, Poland, starting from the pre-operational phase. The SHEER database can adopt new data, such as the results of other Work Packages, and has an over-arching structure for higher-level integration.
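
    Harmonizing "data from different fields" means mapping discipline-specific records onto one shared schema before they enter the database. The Python sketch below illustrates that step with invented field names and records; the SHEER schema itself is not given in this note, so everything here is a hypothetical example of the pattern.

        RAW_RECORDS = [
            {"source": "seismic", "t": "2016-05-01T12:00:00Z", "magnitude": 1.8},
            {"source": "geochem", "timestamp": "2016-05-01", "ch4_ppm": 2.1},
        ]

        def harmonize(record: dict) -> dict:
            """Map a discipline-specific record onto a shared schema
            (site, time, parameter, value)."""
            if record["source"] == "seismic":
                return {"site": "Wysin", "time": record["t"],
                        "parameter": "local_magnitude", "value": record["magnitude"]}
            if record["source"] == "geochem":
                return {"site": "Wysin", "time": record["timestamp"],
                        "parameter": "methane_ppm", "value": record["ch4_ppm"]}
            raise ValueError(f"unknown source {record['source']!r}")

        for row in map(harmonize, RAW_RECORDS):
            print(row)

    Once every record carries the same keys, cross-disciplinary queries (for example, all indicators at Wysin in a given month) become single lookups, which is the practical payoff of harmonization.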

    Adapting SAM for CDF

    The CDF and D0 experiments probe the high-energy frontier and, as they do so, have accumulated hundreds of terabytes of data on the way to petabytes over the next two years. The experiments have made a commitment to use the developing Grid based on the SAM system to handle these data. The D0 SAM has been extended for use in CDF as common patterns of design emerged to meet the similar requirements of the two experiments. The process by which the merger was achieved is explained, with particular emphasis on lessons learned concerning the database design patterns and the realization of the use cases.
    Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics conference (CHEP03), La Jolla, CA, USA, March 2003; 4 pages, PDF format, TUAT00
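
    A design pattern that lets one system serve two experiments is to express data handling over datasets of files with experiment-tagged metadata, rather than over experiment-specific structures. The Python sketch below is a hypothetical illustration of that shared-catalog idea; it is not the real SAM interface, and all names in it are invented.

        class DatasetCatalog:
            """Toy metadata catalog serving two experiments with one schema."""

            def __init__(self) -> None:
                self._files: list[dict] = []

            def declare_file(self, name: str, experiment: str,
                             dataset: str, events: int) -> None:
                self._files.append({"name": name, "experiment": experiment,
                                    "dataset": dataset, "events": events})

            def files_in_dataset(self, experiment: str, dataset: str) -> list[str]:
                return [f["name"] for f in self._files
                        if f["experiment"] == experiment and f["dataset"] == dataset]

        catalog = DatasetCatalog()
        catalog.declare_file("bphys_0001.root", "CDF", "bphysics", 250_000)
        catalog.declare_file("bphys_0002.root", "CDF", "bphysics", 240_000)
        catalog.declare_file("top_0001.root", "D0", "top", 180_000)
        print(catalog.files_in_dataset("CDF", "bphysics"))

    Because the experiment is just another metadata field here, extending such a system from one experiment to another is a matter of configuration rather than a new schema, which is the kind of reuse the abstract reports.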