Comparative evaluation of open source digital library packages
Paper presented at OSLS-2009. In the last decade, the way information is published and accessed has changed considerably because of the availability of open access repositories. This became possible thanks to a varied range of open source repository packages. Driven by additional factors such as yearly reductions in library budgets and a growing urge to increase the visibility of their institutions, well-known institutions have also started establishing repositories. This paper starts with a brief introduction to different open access models, the open source philosophy and some of the expectations of a digital library package, and then evaluates DSpace, EPrints and Greenstone.
Purpose:
This paper tries to evaluate some of the most popular digital library packages. It can help digital
library administrators to decide among the available packages.
Methodology:
The evaluation is done using a checklist with different categories. Each category is assigned a weight according to its importance for the package.
Findings:
The study shows that most of the packages are still at a developing stage but are nevertheless capable of providing good service. Among DSpace, EPrints and Greenstone, DSpace emerged as the best option.
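The weighted-checklist evaluation described above can be illustrated with a small scoring computation. The category names, weights and raw scores below are hypothetical placeholders, not the values used in the paper; the sketch only shows how per-category scores and weights would combine into an overall rating for each package.

```python
# Minimal sketch of a weighted-checklist evaluation.
# Category names, weights and raw scores are hypothetical placeholders,
# not the actual checklist values used in the paper.

weights = {
    "installation": 2,
    "metadata_support": 3,
    "interoperability": 3,
    "user_interface": 2,
}

scores = {  # assumed raw checklist scores per package, 0-5 scale
    "DSpace":     {"installation": 4, "metadata_support": 5, "interoperability": 5, "user_interface": 4},
    "EPrints":    {"installation": 4, "metadata_support": 4, "interoperability": 4, "user_interface": 4},
    "Greenstone": {"installation": 3, "metadata_support": 4, "interoperability": 3, "user_interface": 3},
}

def weighted_total(package_scores):
    """Sum of (category weight x category score) over all checklist categories."""
    return sum(weights[category] * score for category, score in package_scores.items())

# Rank the packages by their weighted totals, highest first.
for package in sorted(scores, key=lambda p: weighted_total(scores[p]), reverse=True):
    print(f"{package}: {weighted_total(scores[package])}")
```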
The H.E.S.S. central data acquisition system
The High Energy Stereoscopic System (H.E.S.S.) is a system of Imaging
Atmospheric Cherenkov Telescopes (IACTs) located in the Khomas Highland in
Namibia. It measures cosmic gamma rays of very high energies (VHE; >100 GeV)
using the Earth's atmosphere as a calorimeter. The H.E.S.S. Array entered Phase
II in September 2012 with the inauguration of a fifth telescope that is larger
and more complex than the other four. This paper will give an overview of the
current H.E.S.S. central data acquisition (DAQ) system with particular emphasis
on the upgrades made to integrate the fifth telescope into the array. At first,
the various requirements for the central DAQ are discussed; then the general
design principles employed to fulfil these requirements are described. Finally,
the performance, stability and reliability of the H.E.S.S. central DAQ are
presented. One of the major accomplishments is that less than 0.8% of
observation time has been lost due to central DAQ problems since 2009.
Comment: 17 pages, 8 figures, published in Astroparticle Physics
A review of traffic simulation software
Computer simulation of traffic is a widely used method in research on traffic modelling, planning and development of traffic networks and systems. Vehicular traffic systems are of growing concern and interest globally, and modelling arbitrarily complex traffic systems is a hard problem. In this article we review some of the traffic simulation software applications, their features and characteristics as well as the issues these applications face. Additionally, we introduce some algorithmic ideas, underpinning data-structural approaches and quantifiable metrics that can be applied to simulated model systems.
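As a toy illustration of the kind of model such simulation packages implement, the following sketch runs a single-lane cellular-automaton traffic model in the spirit of Nagel-Schreckenberg and reports a simple quantifiable metric (mean speed). The road length, density, speed limit and dawdling probability are assumed demonstration values, not parameters taken from any of the reviewed applications.

```python
import random

# Toy single-lane cellular-automaton traffic model (Nagel-Schreckenberg style).
# Road length, car density, speed limit and dawdling probability are assumed
# demonstration values, not parameters of any reviewed simulation package.
ROAD_LEN, DENSITY, V_MAX, P_SLOW, STEPS = 100, 0.2, 5, 0.3, 200

positions = random.sample(range(ROAD_LEN), int(DENSITY * ROAD_LEN))
cars = {pos: 0 for pos in positions}  # position on the ring road -> current speed

def step(cars):
    """One parallel update: accelerate, brake to the gap ahead, dawdle, move."""
    occupied = sorted(cars)
    updated = {}
    for i, pos in enumerate(occupied):
        v = min(cars[pos] + 1, V_MAX)                                   # accelerate
        gap = (occupied[(i + 1) % len(occupied)] - pos - 1) % ROAD_LEN  # free cells ahead
        v = min(v, gap)                                                 # avoid collision
        if v > 0 and random.random() < P_SLOW:                          # random slowdown
            v -= 1
        updated[(pos + v) % ROAD_LEN] = v
    return updated

mean_speeds = []
for _ in range(STEPS):
    cars = step(cars)
    mean_speeds.append(sum(cars.values()) / len(cars))

# A simple quantifiable metric of the simulated system: long-run mean speed.
print(f"mean speed: {sum(mean_speeds) / len(mean_speeds):.2f} cells per step")
```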
Making automated computer program documentation a feature of total system design
It is pointed out that in large-scale computer software systems, program documents are too often fraught with errors, out of date, poorly written, and sometimes nonexistent in whole or in part. The means are described by which many of these typical system documentation problems were overcome in a large and dynamic software project. A systems approach was employed which encompassed such items as: (1) configuration management; (2) standards and conventions; (3) collection of program information into central data banks; (4) interaction among executive, compiler, central data banks, and configuration management; and (5) automatic documentation. A complete description of the overall system is given
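The central idea of drawing documentation automatically from configuration-managed program information can be sketched in miniature. The record fields below are hypothetical and do not reflect the project's actual data-bank schema; the sketch only illustrates documentation being rendered from structured records rather than written by hand.

```python
# Minimal sketch of rendering program documentation from a central store of
# structured program information instead of writing it by hand. The record
# fields below are hypothetical, not the schema of the project's data banks.

program_data_bank = [
    {"module": "TRAJ01", "version": "3.2", "purpose": "Trajectory integration",
     "inputs": ["state vector", "time step"], "outputs": ["updated state vector"]},
    {"module": "TELEM02", "version": "1.7", "purpose": "Telemetry formatting",
     "inputs": ["raw frames"], "outputs": ["formatted records"]},
]

def render_module_doc(record):
    """Produce a plain-text documentation stub for one module record."""
    return "\n".join([
        f"MODULE {record['module']}  (version {record['version']})",
        f"  Purpose : {record['purpose']}",
        f"  Inputs  : {', '.join(record['inputs'])}",
        f"  Outputs : {', '.join(record['outputs'])}",
    ])

# Regenerating the documentation whenever the data bank changes keeps the
# documents synchronized with the configuration-managed program information.
for record in program_data_bank:
    print(render_module_doc(record), end="\n\n")
```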
Large Language Models Based Automatic Synthesis of Software Specifications
Software configurations play a crucial role in determining the behavior of
software systems. In order to ensure safe and error-free operation, it is
necessary to identify the correct configurations, along with their valid bounds
and rules, which are commonly referred to as software specifications. As
software systems grow in complexity and scale, the number of configurations and
associated specifications required to ensure the correct operation can become
large and prohibitively difficult to manipulate manually. Due to the fast pace
of software development, it is often the case that correct software
specifications are not thoroughly checked or validated within the software
itself. Rather, they are frequently discussed and documented in a variety of
external sources, including software manuals, code comments, and online
discussion forums. Therefore, it is hard for the system administrator to know
the correct specifications of configurations due to the lack of clarity,
organization, and a centralized unified source to look at. To address this
challenge, we propose SpecSyn, a framework that leverages a state-of-the-art
large language model to automatically synthesize software specifications from
natural language sources. Our approach formulates software specification
synthesis as a sequence-to-sequence learning problem and investigates the
extraction of specifications from large contextual texts. This is the first
work that uses a large language model for end-to-end specification synthesis
from natural language texts. Empirical results demonstrate that our system
outperforms the prior state-of-the-art specification synthesis tool by 21% in terms of F1 score and can find specifications from single as well as multiple sentences.
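The paper's implementation is not reproduced here, but the sequence-to-sequence framing can be sketched as follows: natural-language text about a configuration is fed to a pretrained encoder-decoder model that, after fine-tuning on (text, specification) pairs, emits a structured specification string. The checkpoint name, prompt wording and expected output below are illustrative assumptions, not SpecSyn's actual setup.

```python
# Minimal sketch of specification extraction framed as sequence-to-sequence
# generation. The "t5-small" checkpoint, prompt wording and output format are
# illustrative assumptions, not SpecSyn's actual model or training setup, and
# a generic checkpoint will not emit useful specifications without fine-tuning
# on (text, specification) pairs.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

text = ("The max_connections option controls how many clients may connect "
        "concurrently; values above 10000 are rejected at startup.")
inputs = tokenizer("extract specification: " + text, return_tensors="pt", truncation=True)

output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
# After fine-tuning, the expected output would be a structured constraint
# such as: max_connections <= 10000
```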
Using Provenance to support Good Laboratory Practice in Grid Environments
Conducting experiments and documenting results is daily business of
scientists. Good and traceable documentation enables other scientists to
confirm procedures and results for increased credibility. Documentation and
scientific conduct are regulated and termed as "good laboratory practice."
Laboratory notebooks are used to record each step in conducting an experiment
and processing data. Originally, these notebooks were paper based. Due to
computerised research systems, acquired data became more elaborate, thus
increasing the need for electronic notebooks with data storage, computational
features and reliable electronic documentation. As a new approach to this, a
scientific data management system (DataFinder) is enhanced with features for
traceable documentation. Provenance recording is used to meet requirements of
traceability, and this information can later be queried for further analysis.
DataFinder has further important features for scientific documentation: It
employs a heterogeneous and distributed data storage concept. This enables
access to different types of data storage systems (e.g. Grid data
infrastructure, file servers). In this chapter we describe a number of building
blocks that are available or close to finished development. These components
are intended for assembling an electronic laboratory notebook for use in Grid
environments, while retaining maximal flexibility in usage scenarios as well as maximal compatibility with each other. Through the use of such a
system, provenance can successfully be used to trace the scientific workflow of
preparation, execution, evaluation, interpretation and archiving of research
data. The reliability of research results increases and the research process
remains transparent to remote research partners.
Comment: Book Chapter for "Data Provenance and Data Management for eScience," Studies in Computational Intelligence series, Springer. 25 pages, 8 figures
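DataFinder's internals are not shown here, but the notion of traceable provenance recording can be sketched as a chain of timestamped records describing each step of preparation, execution and evaluation, which can later be queried to trace how a result was produced. The record structure below is an assumed simplification, not DataFinder's actual data model.

```python
# Minimal sketch of provenance recording for an experiment workflow. The record
# structure is an assumed simplification, not DataFinder's actual data model.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    activity: str      # e.g. "preparation", "execution", "evaluation"
    agent: str         # researcher or service that performed the step
    inputs: list       # artifacts consumed by the step
    outputs: list      # artifacts produced by the step
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

log = []

def record(activity, agent, inputs, outputs):
    log.append(ProvenanceRecord(activity, agent, inputs, outputs))

record("preparation", "alice", [], ["protocol_v2.pdf"])
record("execution", "alice", ["protocol_v2.pdf"], ["raw_run_001.dat"])
record("evaluation", "bob", ["raw_run_001.dat"], ["results_run_001.csv"])

def trace(artifact):
    """Yield every recorded step that contributed, directly or indirectly, to an artifact."""
    for entry in reversed(log):
        if artifact in entry.outputs:
            yield entry
            for parent in entry.inputs:
                yield from trace(parent)

for entry in trace("results_run_001.csv"):
    print(entry.timestamp, entry.activity, entry.agent, entry.inputs, "->", entry.outputs)
```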
Maintenance Knowledge Management with Fusion of CMMS and CM
Abstract- Maintenance can be considered as an information, knowledge processing and management system. The management of knowledge resources in maintenance is a relatively new issue compared to Computerized Maintenance Management Systems (CMMS) and Condition Monitoring (CM) approaches and systems. Information and Communication Technology (ICT) systems, including CMMS, CM and enterprise administrative systems amongst others, are effective in supplying data and, in some cases, information. For these to be effective, the availability of high-quality knowledge, skills and expertise is needed for analysis and decision-making based on the supplied information and data. Information and data are not by themselves enough; knowledge, experience and skills are the key factors when maximizing the usability of the collected data and information. Thus, effective knowledge management (KM) is growing in importance, especially in advanced processes and in the management of advanced and expensive assets. Therefore, efforts to successfully integrate maintenance knowledge management processes with accurate information from CMMSs and CM systems will be vital due to the increasing complexity of the overall systems.
Low maintenance effectiveness costs money and resources, since normal and stable production cannot be upheld and maintained over time; lowered maintenance effectiveness can have a substantial impact on the organization's ability to obtain stable flows of income and to control costs in the overall process. Ineffective maintenance is often a result of faulty decisions, mistakes due to lack of experience and a lack of functional systems for effective information exchange [10]. Thus, access to knowledge, experience and skills in combination with functional collaboration structures can be regarded as vital components of a highly effective maintenance solution.
Maintenance effectiveness depends in part on the quality, timeliness, accuracy and completeness of information related to the machine degradation state, on which decisions are based. Maintenance effectiveness also depends, to a large extent, on the quality of the knowledge of the managers and maintenance operators and on the effectiveness of the internal and external collaborative environments. With the emergence of intelligent sensors to measure and monitor the health state of components and the gradual implementation of ICT in organizations, the conceptualization and implementation of E-Maintenance is turning into a reality. Unfortunately, even though knowledge management aspects are important in maintenance, the integration of KM aspects has still to find its place in E-Maintenance and in the overall information flows of larger-scale maintenance solutions. Nowadays, two main systems are implemented in most maintenance departments: firstly, Computerized Maintenance Management Systems (CMMS), the core of traditional maintenance record-keeping practices, which often facilitate the use of textual descriptions of faults and actions performed on an asset; secondly, condition monitoring systems (CMS). Recently developed CMS are capable of directly monitoring asset component parameters; however, attempts to link observed CMMS events to CM sensor measurements have been limited in their approach and scalability. In this article we present one approach for addressing this challenge. We argue that understanding the requirements and constraints in conjunction - from maintenance, knowledge management and ICT perspectives - is necessary. We identify the issues that need to be addressed for achieving successful integration of such disparate data types and processes (also integrating knowledge management into the "data types" and processes).
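One concrete piece of such an integration, linking a textual CMMS work order to the CM sensor readings recorded around the time of the reported fault, can be sketched as follows. The field names, asset identifiers and time window are illustrative assumptions rather than a complete e-maintenance design.

```python
# Minimal sketch of fusing CMMS work-order text with CM sensor readings by
# aligning them in time per asset. Field names, asset identifiers and the time
# window are illustrative assumptions, not a complete e-maintenance design.
from datetime import datetime, timedelta

cmms_events = [
    {"asset": "pump-07", "time": datetime(2023, 5, 2, 14, 30),
     "text": "Abnormal vibration reported; bearing replaced."},
]

cm_readings = [
    {"asset": "pump-07", "time": datetime(2023, 5, 2, 13, 50), "vibration_rms": 7.8},
    {"asset": "pump-07", "time": datetime(2023, 5, 2, 14, 10), "vibration_rms": 9.4},
    {"asset": "pump-03", "time": datetime(2023, 5, 2, 14, 10), "vibration_rms": 2.1},
]

def link(event, readings, window=timedelta(hours=2)):
    """Return the CM readings for the same asset taken within the window before the event."""
    return [r for r in readings
            if r["asset"] == event["asset"]
            and event["time"] - window <= r["time"] <= event["time"]]

for event in cmms_events:
    print(event["asset"], "-", event["text"])
    for reading in link(event, cm_readings):
        print("  ", reading["time"], "vibration_rms =", reading["vibration_rms"])
```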
Taking statistical machine translation to the student translator
Despite the growth of statistical machine translation (SMT) research and development in recent years, it remains somewhat out of reach for the translation community where programming expertise and knowledge of statistics tend not to be commonplace. While the concept of SMT is relatively straightforward, its implementation in functioning systems remains difficult for most, regardless of expertise. More recently, however, developments such as SmartMATE have emerged which aim to assist users in creating their own customized SMT systems and thus reduce the learning curve associated with SMT. In addition to commercial uses, translator training stands to benefit from such increased levels of inclusion and access to state-of-the-art approaches to MT. In this paper we draw on experience in developing and evaluating a new syllabus in SMT for a cohort of post-graduate student translators: we identify several issues encountered in the introduction of student translators to SMT, and report on data derived from repeated measures questionnaires that aim to capture data on students' self-efficacy in the use of SMT. Overall, results show that participants report significant increases in their levels of confidence and knowledge of MT in general, and of SMT in particular. Additional benefits - such as increased technical competence and confidence - and future refinements are also discussed.
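The repeated-measures questionnaire data mentioned above would typically be analysed with a paired comparison of pre- and post-course responses. The sketch below uses invented scores and a paired t-test purely to illustrate that kind of analysis; it does not reproduce the study's actual data or statistical procedure.

```python
# Minimal sketch of a paired (repeated-measures) comparison of self-efficacy
# scores before and after the SMT syllabus. The scores are invented for
# illustration and the test choice is an assumption; the study's actual data
# and statistical procedure are not reproduced here.
from scipy.stats import ttest_rel

pre  = [2, 3, 2, 1, 3, 2, 2, 3, 1, 2]   # self-rated confidence before the course (assumed 1-5 scale)
post = [4, 4, 3, 3, 4, 3, 4, 4, 3, 3]   # self-rated confidence after the course

t_stat, p_value = ttest_rel(post, pre)
print(f"mean change: {sum(post) / len(post) - sum(pre) / len(pre):.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```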
CRAY mini manual. Revision D
This document briefly describes the use of the CRAY supercomputers that are an integral part of the Supercomputing Network Subsystem of the Central Scientific Computing Complex at LaRC. Features of the CRAY supercomputers are covered, including: FORTRAN, C, PASCAL, architectures of the CRAY-2 and CRAY Y-MP, the CRAY UNICOS environment, batch job submittal, debugging, performance analysis, parallel processing, utilities unique to CRAY, and documentation. The document is intended for all CRAY users as a ready reference to frequently asked questions and to more detailed information contained in the vendor manuals. It is appropriate for both the novice and the experienced user