Open source in libraries: Implementation of an open source ILMS at Asia e-University library
Open source systems for libraries have improved significantly and gained the confidence of librarians. The main focus of this paper is to describe the selection process and criteria that led to the implementation of Koha, the first open source Integrated Library Management System, at the AeU Library. A study was made based on a set of criteria used to compare and contrast Koha with the more popular proprietary library management systems. The paper presents the findings of the study that led to the selection of Koha, along with a brief introduction to the features of open source systems for libraries. The reasoning and conditions for adopting Koha are discussed, and a brief account is given of the implementation process and related experience with the open source ILMS. The AeU Library implemented the various modules of the system: cataloguing, online public access (OPAC), circulation, patron management and acquisitions. The influence and acceptance of OSS in libraries continues to expand, and Malaysian libraries may need to look into the credible options and benefits of open source systems and harness this development in ILS.
e-Report Generator Supporting Communications and Fieldwork: A Practical Case of Electrical Network Expansion Projects
In this work we present a simple way to incorporate Geographical Information System tools, developed using open source software, to support the different processes in the expansion of an electrical network. This is accomplished by developing a novel fieldwork tool that provides the user with automatically generated, enriched e-reports containing information about each of the private properties involved in a specific project. These reports are an eco-friendly alternative to paper and can be accessed by clients from any kind of personal device with a minimal set of technical requirements.
C2MS: Dynamic Monitoring and Management of Cloud Infrastructures
Server clustering is a common design principle employed by many organisations that require high availability, scalability and easier management of their infrastructure. Servers are typically clustered according to the service they provide, for example the application(s) installed, the role of the server or server accessibility. In order to optimise performance, manage load and maintain availability, servers may migrate from one cluster group to another, making it difficult for server monitoring tools to continuously monitor these dynamically changing groups. Server monitoring tools are usually statically configured, so any change of group membership requires manual reconfiguration; an unreasonable task to undertake on large-scale cloud infrastructures.
In this paper we present the Cloudlet Control and Management System (C2MS), a system for monitoring and controlling dynamic groups of physical or virtual servers within cloud infrastructures. The C2MS extends Ganglia, an open source scalable system performance monitoring tool, by allowing system administrators to define, monitor and modify server groups without the need for server reconfiguration. In turn, administrators can easily monitor group and individual server metrics on large-scale dynamic cloud infrastructures where the roles of servers may change frequently. Furthermore, we complement group monitoring with a control element allowing administrator-specified actions to be performed over servers within service groups, and introduce further customised monitoring metrics. This paper outlines the design, implementation and evaluation of the C2MS.
Comment: Proceedings of the 5th IEEE International Conference on Cloud Computing Technology and Science (CloudCom 2013), 8 pages
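The core idea, regrouping servers at run time without reconfiguring any monitoring agent, can be sketched as follows. This is an illustrative Python model, not C2MS or Ganglia code; all class and method names are hypothetical:

```python
from collections import defaultdict


class GroupMonitor:
    """Minimal sketch of dynamic server grouping: servers move between
    groups at run time, and per-group metrics are aggregated on demand,
    so no monitoring agent ever needs to be reconfigured."""

    def __init__(self):
        self.group_of = {}                # server -> current group name
        self.metrics = defaultdict(dict)  # server -> {metric: latest value}

    def assign(self, server, group):
        # Regroup without restarting or reconfiguring any agent.
        self.group_of[server] = group

    def report(self, server, metric, value):
        # Values would be pushed by per-host agents (gmond-like daemons).
        self.metrics[server][metric] = value

    def group_metric(self, group, metric):
        # Aggregate a metric over the group's *current* membership.
        values = [m[metric] for s, m in self.metrics.items()
                  if self.group_of.get(s) == group and metric in m]
        return sum(values) / len(values) if values else None
```

Because membership is resolved at query time, moving a server from one group to another is a single `assign` call, and subsequent group aggregates reflect the change immediately.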
SIMBA: a web tool for managing bacterial genome assembly generated by Ion PGM sequencing technology
Background: The evolution of Next-Generation Sequencing (NGS) has considerably reduced the cost per sequenced base, allowing a significant rise in sequencing projects, mainly in prokaryotes. However, the range of available NGS platforms requires different strategies and software to correctly assemble genomes, and completing an assembly project properly involves installing or modifying various software packages. This demands significant expertise in the software involved and command-line scripting experience on Unix platforms, in addition to a basic grounding in genome assembly methodologies and techniques. These difficulties often delay the completion of genome assembly projects. Results: To overcome this, we developed SIMBA (SImple Manager for Bacterial Assemblies), a freely available web tool that integrates several component tools for assembling and finishing bacterial genomes. SIMBA provides a friendly and intuitive user interface so that bioinformaticians, even those with limited computational expertise, can work under a centralized administrative control system of assemblies managed by the assembly center head. SIMBA guides users through the assembly process with simple, interactive pages. The SIMBA workflow is divided into three modules: (i) projects, which gives a general view of genome sequencing projects, in addition to data quality analysis and data format conversion; (ii) assemblies, which supports de novo assembly with the software Mira, Minia, Newbler and SPAdes, as well as assembly quality validation using the QUAST software; and (iii) curation, which presents methods for finishing assemblies through tools for scaffolding contigs and closing gaps. We also present a case study that validates the efficacy of SIMBA for managing bacterial assembly projects sequenced using the Ion Torrent PGM.
Conclusion: Besides being a web tool for genome assembly, SIMBA is a complete assembly project management system, which can be useful for managing several projects in a laboratory. The SIMBA source code is available to download and install on local web servers at http://ufmg-simba.sourceforge.net.
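A web front end such as SIMBA ultimately drives the underlying assemblers and validators through their command lines. As a hedged illustration, a wrapper for the SPAdes/QUAST steps might build commands like the following; the wrapper functions are hypothetical, while the flags follow the tools' documented CLIs:

```python
def spades_cmd(reads, outdir, threads=4):
    """Build a SPAdes command for single-end Ion Torrent reads.
    --iontorrent enables Ion Torrent error-model handling, -s supplies
    unpaired reads, -o sets the output directory."""
    return ["spades.py", "--iontorrent",
            "-s", reads, "-o", outdir, "-t", str(threads)]


def quast_cmd(contigs, outdir):
    """Build a QUAST command to produce an assembly quality report."""
    return ["quast.py", contigs, "-o", outdir]


# A pipeline would then run these, e.g. with subprocess.run(cmd, check=True),
# and surface the QUAST report back to the user interface.
```

Keeping command construction separate from execution makes each step easy to log, retry and test, which matters when assemblies are managed centrally across many projects.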
UC Berkeley's Cory Hall: Evaluation of Challenges and Potential Applications of Building-to-Grid Implementation
From September 2009 through June 2010, a team of researchers developed, installed, and tested instrumentation on the energy flows in Cory Hall on the UC Berkeley campus to create a Building-to-Grid testbed. The UC Berkeley team was headed by Professor David Culler, and assisted by members from EnerNex, Lawrence Berkeley National Laboratory, California State University Sacramento, and the California Institute for Energy & Environment. While the Berkeley team mapped the load tree of the building, EnerNex researched types of meters, submeters, monitors, and sensors to be used (Task 1). Next the UC Berkeley team analyzed building needs and designed the network of metering components and data storage/visualization software (Task 2). After meeting with vendors in January, the UCB team procured and installed the components starting in late March (Task 3). Next, the UCB team tested and demonstrated the system (Task 4). Meanwhile, the CSUS team documented the methodology and steps necessary to implement a testbed (Task 5) and Harold Galicer developed a roadmap for the CSUS Smart Grid Center with results from the testbed (Task 5a) and evaluated the Cory Hall implementation process (Task 5b). The CSUS team also worked with local utilities to develop an approach to the energy information communication link between buildings and the utility (Task 6). The UC Berkeley team then prepared a roadmap to outline necessary technology development for Building-to-Grid, and presented the results of the project in early July (Task 7). Finally, CIEE evaluated the implementation, noting challenges and potential applications of Building-to-Grid (Task 8). These deliverables are available at the i4Energy site: http://i4energy.org/
Prospects and Challenges in R Package Development
R, a software package for statistical computing and graphics, has evolved into the lingua franca of (computational) statistics. One of the cornerstones of R's success is the decentralized and modularized way of creating software using a multi-tiered development model: the R Development Core Team provides the "base system", which delivers basic statistical functionality, and many other developers contribute code in the form of extensions in a standardized format via so-called packages. In order to be accessible to a broader audience, packages are made available via standardized source code repositories. To support such a loosely coupled development model, repositories should be able to verify that the provided packages meet certain formal quality criteria and "work", both relative to the development of the base R system and with other packages (interoperability). However, established quality assurance systems and collaborative infrastructures typically face several challenges, some of which we discuss in this paper.
Series: Research Report Series / Department of Statistics and Mathematics
The detection and tracking of mine-water pollution from abandoned mines using electrical tomography
Increasing emphasis is being placed on the environmental and societal impact of mining, particularly in the EU, where the environmental impacts of abandoned mine sites (spoil heaps and tailings) are now subject to the legally binding Water Framework and Mine Waste Directives.
Traditional sampling to monitor the impact of mining on surface waters and groundwater is laborious, expensive and often unrepresentative. In particular, sparse and infrequent borehole sampling may fail to capture the dynamic behaviour associated with important events such as flash flooding, mine-water break-out, and subsurface acid mine drainage. Current monitoring practice is therefore failing to provide the information needed to assess the socio-economic and environmental impact of mining on vulnerable ecosystems, or to give adequate early warning to allow preventative maintenance or containment. BGS has developed a tomographic imaging system known as ALERT (Automated time-Lapse Electrical Resistivity Tomography) which allows the near real-time measurement of geoelectric properties "on demand", thereby giving early warning of potential threats to vulnerable water systems. Permanent in-situ geoelectric measurements are used to provide surrogate indicators of hydrochemical and hydrogeological properties. The ALERT survey concept uses electrode arrays permanently buried in shallow trenches at the surface, but these arrays could equally be deployed in mine entries, shafts or underground workings. This sensor network is then interrogated from the office by wireless telemetry (e.g. GSM, low-power radio, internet and satellite) to provide volumetric images of the subsurface at regular intervals. Once installed, no manual intervention is required; data is transmitted automatically according to a pre-programmed schedule and for specific survey parameters, both of which may be varied remotely as conditions change (i.e. an adaptive sampling approach). The entire process from data capture to visualisation on the web portal is seamless, with no manual intervention.
Examples are given where ALERT has been installed and used to remotely monitor (i) seawater intrusion in a coastal aquifer, (ii) domestic landfills and contaminated land, and (iii) vulnerable earth embankments. The full potential of the ALERT concept for monitoring mine waste has yet to be demonstrated. However, we have used manual electrical tomography surveys to characterise mine-waste pollution at an abandoned metalliferous mine in the Central Wales orefield in the UK. Hydrogeochemical sampling confirms that electrical tomography can provide a reliable surrogate for the mapping and long-term monitoring of mine-water pollution.
PoliSave: Efficient Power Management of Campus PCs
In this paper we study the power consumption of networked devices in a large campus network, focusing mainly on PC usage. We first define a methodology to monitor host power state, which we then apply to our campus network. Results show that people typically refrain from turning off their PCs during non-working hours, so that more than 1500 PCs are always powered on, causing a large energy waste. We then design PoliSave, a simple web-based architecture that allows users to schedule the power state of their PCs, avoiding the frustration of the long power-down and bootstrap times of today's PCs. By exploiting already available technologies such as Wake-on-LAN, hibernation and Web services, PoliSave reduces the average PC uptime from 15.9h to 9.7h during working days, yielding an energy saving of 0.6 kWh per PC per day, or more than 250,000 Euros per year across our university campus.
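The remote power-on side of such a scheduler rests on Wake-on-LAN: a "magic packet" of six 0xFF bytes followed by the target MAC address repeated sixteen times, sent as a UDP broadcast. A minimal, self-contained sketch (not PoliSave's actual implementation):

```python
import socket


def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 x 0xFF followed by the
    target MAC address repeated 16 times (102 bytes in total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16


def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet so the sleeping host's NIC powers it on.
    Port 9 (discard) is the conventional choice for WoL datagrams."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(magic_packet(mac), (broadcast, port))
```

A web scheduler then only needs to store each user's MAC address and call `wake` at the requested time; the power-down path would use hibernation triggered by an agent or Web service on the host.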