
    Open source in libraries: Implementation of an open source ILMS at Asia e-University library

    Open source systems for libraries have improved significantly, gaining the confidence of librarians. The main focus of this paper is to describe the selection process and criteria that led to the implementation of Koha, the first open source Integrated Library Management System (ILMS), at the AeU Library. A study was conducted, based on a set of criteria, comparing and contrasting Koha with the more popular proprietary library management systems. The paper presents the findings of the study that led to the selection of Koha, along with a brief introduction to the features of open source systems for libraries. The reasoning and conditions for adopting Koha are discussed, and a brief account of the implementation process and the related experience with the open source ILMS is given. The AeU Library implemented the various modules of the system: cataloguing, online public access (OPAC), circulation, patron management and acquisitions. The expanding influence and acceptance of open source software in libraries is here to stay, and Malaysian libraries may need to look into the credible options and benefits of utilizing open source systems and harness this development in their ILS choices

    e-Report Generator Supporting Communications and Fieldwork: A Practical Case of Electrical Network Expansion Projects

    In this work we present a simple way to incorporate Geographical Information System (GIS) tools, developed using open source software, to support the different processes involved in expanding an electrical network. This is accomplished through a novel fieldwork tool that provides the user with automatically generated, enriched e-reports containing information about each of the private real-estate parcels involved in a specific project. These reports are an eco-friendly alternative to paper and can be accessed by clients from any kind of personal device with a minimal set of technical requirements

    C2MS: Dynamic Monitoring and Management of Cloud Infrastructures

    Server clustering is a common design principle employed by many organisations that require high availability, scalability and easier management of their infrastructure. Servers are typically clustered according to the service they provide, whether it be the application(s) installed, the role of the server or server accessibility, for example. In order to optimize performance, manage load and maintain availability, servers may migrate from one cluster group to another, making it difficult for server monitoring tools to continuously monitor these dynamically changing groups. Server monitoring tools are usually statically configured, and any change of group membership requires manual reconfiguration; an unreasonable task to undertake on large-scale cloud infrastructures. In this paper we present the Cloudlet Control and Management System (C2MS), a system for monitoring and controlling dynamic groups of physical or virtual servers within cloud infrastructures. The C2MS extends Ganglia, an open source scalable system performance monitoring tool, by allowing system administrators to define, monitor and modify server groups without the need for server reconfiguration. In turn, administrators can easily monitor group and individual server metrics on large-scale dynamic cloud infrastructures where the roles of servers may change frequently. Furthermore, we complement group monitoring with a control element that allows administrator-specified actions to be performed over servers within service groups, and introduce further customized monitoring metrics. This paper outlines the design, implementation and evaluation of the C2MS. Comment: Proceedings of the 5th IEEE International Conference on Cloud Computing Technology and Science (CloudCom 2013), 8 pages
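    The abstract does not include code, but the core idea of monitoring dynamically changing groups can be sketched minimally: recompute group membership on every poll instead of baking it into a static configuration. All names below (Server, the role/load fields) are illustrative stand-ins, not the C2MS or Ganglia API.

```python
# Hypothetical sketch of dynamic server grouping: membership is recomputed
# on each query, so servers that migrate between cluster groups are picked
# up without any monitoring-tool reconfiguration.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    role: str       # e.g. "web", "db"; may change at runtime
    load: float     # last reported load metric (illustrative)

def group_metrics(servers, role):
    """Aggregate a metric over the servers currently holding a role."""
    members = [s for s in servers if s.role == role]
    if not members:
        return {"count": 0, "avg_load": None}
    return {"count": len(members),
            "avg_load": sum(s.load for s in members) / len(members)}

fleet = [Server("a1", "web", 0.25), Server("a2", "web", 0.75),
         Server("b1", "db", 0.20)]
print(group_metrics(fleet, "web"))   # {'count': 2, 'avg_load': 0.5}

# A server migrates to another cluster group; the next poll reflects it.
fleet[0].role = "db"
print(group_metrics(fleet, "web"))   # {'count': 1, 'avg_load': 0.75}
```

    The design choice mirrored here is that group definitions are evaluated lazily at query time, which is what removes the manual-reconfiguration burden the paper describes.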

    SIMBA: a web tool for managing bacterial genome assembly generated by Ion PGM sequencing technology

    Background: The evolution of Next-Generation Sequencing (NGS) has considerably reduced the cost per sequenced base, allowing a significant rise in sequencing projects, mainly in prokaryotes. However, the range of available NGS platforms requires different strategies and software to correctly assemble genomes. Properly completing an assembly project demands different strategies as well as the installation or modification of various software tools, so users need significant expertise with these tools and command-line scripting experience on Unix platforms, in addition to a basic grounding in genome assembly methodologies and techniques. These difficulties often delay complete genome assembly projects. Results: To overcome this, we developed SIMBA (SImple Manager for Bacterial Assemblies), a freely available web tool that integrates several component tools for assembling and finishing bacterial genomes. SIMBA provides a friendly and intuitive user interface so that bioinformaticians, even those with little computational expertise, can work under a centralized administrative control system of assemblies managed by the assembly centre head. SIMBA guides users through the assembly process with simple and interactive pages. The SIMBA workflow is divided into three modules: (i) projects, which gives a general view of genome sequencing projects, in addition to data quality analysis and data format conversions; (ii) assemblies, which supports de novo assemblies with the software Mira, Minia, Newbler and SPAdes, as well as assembly quality validation using the QUAST software; and (iii) curation, which offers methods for finishing assemblies through tools for scaffolding contigs and closing gaps. We also present a case study that validates the efficacy of SIMBA in managing bacterial assembly projects sequenced using Ion Torrent PGM.
    Conclusion: Besides being a web tool for genome assembly, SIMBA is a complete genome assembly project management system, which can be useful for managing several projects in a laboratory. SIMBA's source code is available to download and install on local web servers at http://ufmg-simba.sourceforge.net
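    The three-module workflow (projects, assemblies, curation) is essentially a staged pipeline. As a minimal sketch of that structure, with placeholder functions standing in for the real tools (QC and format conversion, an assembler such as SPAdes, and scaffolding/gap-closing in curation; none of these names come from SIMBA itself):

```python
# Toy three-stage pipeline mirroring SIMBA's module layout.
# Each handler is an illustrative stand-in, not real assembly logic.

def quality_check(reads):
    # "projects" module stand-in: drop reads that fail a trivial QC rule
    return [r for r in reads if len(r) >= 4]

def assemble(reads):
    # "assemblies" module stand-in for a de novo assembler
    return ["".join(reads)] if reads else []

def curate(contigs):
    # "curation" module stand-in: order contigs, longest first
    return sorted(contigs, key=len, reverse=True)

def run_pipeline(reads):
    stages = [("projects", quality_check),
              ("assemblies", assemble),
              ("curation", curate)]
    data = reads
    for name, step in stages:
        data = step(data)
        print(f"{name}: {len(data)} item(s)")
    return data

contigs = run_pipeline(["ACGT", "TTGA", "AC"])
print(contigs)   # ['ACGTTTGA']
```

    The value of a manager like SIMBA is that it drives such a pipeline through a web interface, so the user never touches the command line between stages.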

    Prospects and Challenges in R Package Development

    R, a software package for statistical computing and graphics, has evolved into the lingua franca of (computational) statistics. One of the cornerstones of R's success is the decentralized and modularized way of creating software using a multi-tiered development model: the R Development Core Team provides the "base system", which delivers basic statistical functionality, and many other developers contribute code in the form of extensions in a standardized format via so-called packages. In order to be accessible to a broader audience, packages are made available via standardized source code repositories. To support such a loosely coupled development model, repositories should be able to verify that the provided packages meet certain formal quality criteria and "work", both relative to the development of the base R system and in combination with other packages (interoperability). However, established quality assurance systems and collaborative infrastructures typically face several challenges, some of which we discuss in this paper. Series: Research Report Series / Department of Statistics and Mathematics

    The detection and tracking of mine-water pollution from abandoned mines using electrical tomography

    Increasing emphasis is being placed on the environmental and societal impact of mining, particularly in the EU, where the environmental impacts of abandoned mine sites (spoil heaps and tailings) are now subject to the legally binding Water Framework and Mine Waste Directives. Traditional sampling to monitor the impact of mining on surface waters and groundwater is laborious, expensive and often unrepresentative. In particular, sparse and infrequent borehole sampling may fail to capture the dynamic behaviour associated with important events such as flash flooding, mine-water break-out, and subsurface acid mine drainage. Current monitoring practice is therefore failing to provide the information needed to assess the socio-economic and environmental impact of mining on vulnerable ecosystems, or to give adequate early warning to allow preventative maintenance or containment. BGS has developed a tomographic imaging system known as ALERT (Automated time-Lapse Electrical Resistivity Tomography), which allows the near real-time measurement of geoelectric properties "on demand", thereby giving early warning of potential threats to vulnerable water systems. Permanent in-situ geoelectric measurements are used to provide surrogate indicators of hydrochemical and hydrogeological properties. The ALERT survey concept uses electrode arrays permanently buried in shallow trenches at the surface, but these arrays could equally be deployed in mine entries, shafts or underground workings. This sensor network is then interrogated from the office by wireless telemetry (e.g. GSM, low-power radio, internet, and satellite) to provide volumetric images of the subsurface at regular intervals. Once installed, no manual intervention is required; data is transmitted automatically according to a pre-programmed schedule and for specific survey parameters, both of which may be varied remotely as conditions change (i.e. an adaptive sampling approach).
    The entire process from data capture to visualisation on the web portal is seamless, with no manual intervention. Examples are given where ALERT has been installed and used to remotely monitor (i) seawater intrusion in a coastal aquifer, (ii) domestic landfills and contaminated land, and (iii) vulnerable earth embankments. The full potential of the ALERT concept for monitoring mine waste has yet to be demonstrated. However, we have used manual electrical tomography surveys to characterise mine-waste pollution at an abandoned metalliferous mine in the Central Wales orefield in the UK. Hydrogeochemical sampling confirms that electrical tomography can provide a reliable surrogate for the mapping and long-term monitoring of mine-water pollution
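    The "adaptive sampling" idea, a pre-programmed survey schedule whose parameters can be retuned remotely as conditions change, can be sketched in a few lines. The class name, interval values and change threshold below are all assumptions for illustration, not the BGS implementation:

```python
# Hedged sketch of an adaptive survey scheduler: tighten the measurement
# interval when geoelectric readings change quickly, relax it when stable.

class SurveyScheduler:
    def __init__(self, interval_h=24):
        self.interval_h = interval_h   # hours between resistivity surveys

    def retune(self, resistivity_change):
        # Assumed policy: >10% relative change triggers 6-hourly surveys;
        # otherwise fall back to a daily schedule.
        self.interval_h = 6 if resistivity_change > 0.10 else 24

sched = SurveyScheduler()
sched.retune(0.25)           # e.g. a pollution plume is moving
print(sched.interval_h)      # 6
sched.retune(0.02)           # readings stable again
print(sched.interval_h)      # 24
```

    In the deployed system this retuning happens over the telemetry link, so the sampling rate tracks events such as flash flooding without a site visit.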

    PoliSave: Efficient Power Management of Campus PCs

    In this paper we study the power consumption of networked devices in a large campus network, focusing mainly on PC usage. We first define a methodology to monitor host power state, which we then apply to our campus network. Results show that people typically refrain from turning off their PCs during non-working hours, so that more than 1500 PCs are always powered on, causing a large waste of energy. We then design PoliSave, a simple web-based architecture that allows users to schedule the power state of their PCs, avoiding the frustration of the long power-down and bootstrap times of today's PCs. By exploiting already available technologies such as Wake-on-LAN, hibernation and Web services, PoliSave reduces the average PC uptime from 15.9h to 9.7h during working days, generating an energy saving of 0.6kWh per PC per day, or a saving of more than 250,000 Euros per year for our campus university
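    PoliSave relies on standard Wake-on-LAN to power PCs back up on schedule. The abstract gives no code, so the following is a generic sketch of the WoL "magic packet": 6 bytes of 0xFF followed by the target MAC address repeated 16 times, conventionally sent as a UDP broadcast to port 9. The MAC address shown is a placeholder.

```python
# Build and (optionally) send a Wake-on-LAN magic packet.
import socket

def magic_packet(mac: str) -> bytes:
    """102-byte WoL payload: 6 x 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255") -> None:
    """Broadcast the magic packet on the local network (UDP port 9)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, 9))

pkt = magic_packet("00:11:22:33:44:55")
print(len(pkt))   # 102  (6 + 16 * 6 bytes)
```

    A scheduler like PoliSave's web service only needs to call something like `wake()` at the user-chosen time; the target PC's NIC does the rest, provided Wake-on-LAN is enabled in its firmware.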