Review of the environmental and organisational implications of cloud computing: final report.
Cloud computing – where elastic computing resources are delivered over the Internet by external service providers – is generating significant interest within HE and FE. In the cloud computing business model, organisations or individuals contract with a cloud computing service provider on a pay-per-use basis to access data centres, application software or web services from any location. This provides an elasticity of provision which the customer can scale up or down to meet demand. This form of utility computing potentially opens up a new paradigm in the provision of IT to support administrative and educational functions within HE and FE. Further, the economies of scale and increasingly energy-efficient data centre technologies which underpin cloud services mean that cloud solutions may also have a positive impact on carbon footprints. In response to the growing interest in cloud computing within UK HE and FE, JISC commissioned the University of Strathclyde to undertake a Review of the Environmental and Organisational Implications of Cloud Computing in Higher and Further Education [19].
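The pay-per-use, scale-with-demand economics described above can be made concrete with a small sketch. All prices and demand figures below are hypothetical, chosen only to illustrate why per-hour billing can undercut provisioning for peak load:

```python
# Illustrative sketch (hypothetical prices and demand, not from the report):
# compare fixed provisioning for peak load against pay-per-use cloud billing
# for a workload whose hourly server demand fluctuates.

hourly_demand = [2, 2, 3, 8, 12, 12, 9, 4, 2, 2, 2, 2]  # servers needed per hour
fixed_capacity = max(hourly_demand)   # on-premises must provision for the peak
fixed_cost_per_server_hour = 0.10
cloud_cost_per_server_hour = 0.15     # cloud unit price is higher per hour

fixed_total = fixed_capacity * fixed_cost_per_server_hour * len(hourly_demand)
cloud_total = sum(h * cloud_cost_per_server_hour for h in hourly_demand)

print(f"fixed provisioning: ${fixed_total:.2f}")
print(f"pay-per-use cloud:  ${cloud_total:.2f}")
```

Even at a higher unit price, paying only for the hours actually consumed is cheaper here because utilisation of the fixed capacity is low outside the peak.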
Survey and Analysis of Production Distributed Computing Infrastructures
This report has two objectives. First, we describe a set of the production distributed infrastructures currently available, so that the reader has a basic understanding of them. This includes explaining why each infrastructure was created and made available and how it has succeeded and failed. The set is not complete, but we believe it is representative.

Second, we describe the infrastructures in terms of their use, which is a combination of how they were designed to be used and how users have found ways to use them. Applications are often designed and created with specific infrastructures in mind, with both an appreciation of the existing capabilities provided by those infrastructures and an anticipation of their future capabilities. Here, the infrastructures we discuss were often designed and created with specific applications in mind, or at least specific types of applications. The reader should understand how the interplay between the infrastructure providers and the users leads to such usages, which we call usage modalities. These usage modalities are really abstractions that exist between the infrastructures and the applications; they influence the infrastructures by representing the applications, and they influence the applications by representing the infrastructures.
ELICA: An Automated Tool for Dynamic Extraction of Requirements Relevant Information
Requirements elicitation requires extensive knowledge and deep understanding of the problem domain where the final system will be situated. However, in many software development projects, analysts are required to elicit the requirements from an unfamiliar domain, which often causes communication barriers between analysts and stakeholders. In this paper, we propose a requirements ELICitation Aid tool (ELICA) to help analysts better understand the target application domain by dynamic extraction and labeling of requirements-relevant knowledge. To extract the relevant terms, we leverage the flexibility and power of Weighted Finite State Transducers (WFSTs) in dynamic modeling of natural language processing tasks. In addition to the information conveyed through text, ELICA captures and processes non-linguistic information about the intention of speakers, such as their confidence level, analytical tone, and emotions. The extracted information is made available to the analysts as a set of labeled snippets with highlighted relevant terms, which can also be exported as an artifact of the Requirements Engineering (RE) process. The application and usefulness of ELICA are demonstrated through a case study. This study shows how pre-existing relevant information about the application domain and the information captured during an elicitation meeting, such as the conversation and stakeholders' intentions, can be captured and used to support analysts in achieving their tasks.
Comment: 2018 IEEE 26th International Requirements Engineering Conference Workshop
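The core idea of weighted finite-state labeling of tokens can be sketched in a few lines. This is a toy two-state transducer with illustrative arc costs and a tiny hand-made domain lexicon, not ELICA's actual model: each arc maps an input token class to an output label at some cost, and the cheapest path labels each token as relevant or not.

```python
# Toy weighted finite-state labeling sketch (illustrative lexicon and costs;
# not ELICA's actual WFST). Each arc: (input class, output label, cost, next state).

DOMAIN_TERMS = {"latency", "throughput", "authentication"}  # hypothetical lexicon

def token_class(tok):
    return "domain" if tok in DOMAIN_TERMS else "other"

ARCS = {
    0: [("domain", "RELEVANT", 0.2, 0),
        ("other",  "O",        0.1, 0)],
}

def label_tokens(tokens):
    """Greedily follow the cheapest matching arc for each token."""
    state, total_cost, labels = 0, 0.0, []
    for tok in tokens:
        cls = token_class(tok)
        arc = min((a for a in ARCS[state] if a[0] == cls), key=lambda a: a[2])
        labels.append((tok, arc[1]))
        total_cost += arc[2]
        state = arc[3]
    return labels, total_cost

labels, cost = label_tokens("the authentication latency must stay low".split())
print(labels)
```

A production WFST would instead compose a lexicon transducer with context and scoring transducers and take the shortest path through the composed machine; the sketch keeps only the "weighted arcs over token classes" essence.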
Data hosting infrastructure for primary biodiversity data
© The Author(s), 2011. This article is distributed under the terms of the Creative Commons Attribution License. The definitive version was published in BMC Bioinformatics 12 Suppl. 15 (2011): S5, doi:10.1186/1471-2105-12-S15-S5.

Today, an unprecedented volume of primary biodiversity data is being generated worldwide, yet significant amounts of these data have been and will continue to be lost after the conclusion of the projects tasked with collecting them. To get the most value out of these data, it is imperative to seek a solution whereby these data are rescued, archived and made available to the biodiversity community. To this end, the biodiversity informatics community requires investment in processes and infrastructure to mitigate data loss and provide solutions for long-term hosting and sharing of biodiversity data. We review the current state of biodiversity data hosting and investigate the technological and sociological barriers to proper data management. We further explore the rescue and re-hosting of legacy data and the state of existing toolsets, and propose a future direction for the development of new discovery tools. We also explore the role of data standards and licensing in the context of data hosting and preservation. We provide five recommendations for the biodiversity community that will foster better data preservation and access: (1) encourage the community's use of data standards, (2) promote the public domain licensing of data, (3) establish a community of those involved in data hosting and archival, (4) establish hosting centers for biodiversity data, and (5) develop tools for data discovery. The community's adoption of standards and development of tools to enable data discovery is essential to sustainable data preservation. Furthermore, the increased adoption of open content licensing, the establishment of data hosting infrastructure and the creation of a data hosting and archiving community are all necessary steps towards ensuring that data archival policies become standardized.
Foundations of efficient virtual appliance based service deployments
The use of virtual appliances could provide a flexible solution to service deployment. However, these solutions suffer from several disadvantages: (i) the slow deployment time of services in virtual machines, and (ii) virtual appliances crafted by developers tend to be inefficient for deployment purposes. Researchers target problem (i) by advancing virtualization technologies or by introducing virtual appliance caches on the virtual machine monitor hosts. Others aim at problem (ii) by providing solutions for virtual appliance construction; however, these solutions require deep knowledge of the service's dependencies and its deployment process.

This dissertation addresses problem (i) with a virtual appliance distribution technique that first identifies appliance parts and their internal dependencies, and then, based on service demand, efficiently distributes the identified parts to virtual appliance repositories. Problem (ii) is targeted with the Automated Virtual appliance creation Service (AVS), which can extract and publish a service already deployed by the developer. The acquired virtual appliance is then optimized for service deployment time with the proposed virtual appliance optimization facility, which uses active fault injection to remove the non-functional parts of the appliance. Finally, the investigation of appliance distribution and optimization techniques resulted in the definition of the minimal manageable virtual appliance, which is capable of updating and configuring its executor virtual machine.

The deployment time reduction capabilities of the proposed techniques were measured with several services provided in virtual appliances on three cloud infrastructures. The appliance creation capabilities of the AVS were compared to the virtual appliances already offered by various online appliance repositories. The results reveal that the introduced techniques significantly decrease the deployment time of virtual appliance based deployment systems; as a result, these techniques remove one of the major obstacles to virtual appliance based deployment.
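The active-fault-injection optimization described above can be sketched as a greedy loop: remove one candidate part, re-run a service health check, and keep the removal only if the service still functions. Everything here is an assumed stand-in; `check_service` and the part names are hypothetical, whereas the real facility would redeploy the appliance and probe the live service.

```python
# Minimal sketch of the active-fault-injection idea (hypothetical part names;
# check_service is a stand-in for a real redeploy-and-probe step).

def check_service(parts):
    # Hypothetical health check: the service works while its core
    # dependencies remain present in the appliance.
    required = {"kernel", "libc", "webserver", "app"}
    return required <= parts

def minimize_appliance(parts, check):
    """Greedily drop parts whose removal does not break the service."""
    parts = set(parts)
    for part in sorted(parts):
        trial = parts - {part}       # inject the fault: remove the part
        if check(trial):             # service survived the removal
            parts = trial            # the part was non-functional: drop it
    return parts

appliance = {"kernel", "libc", "webserver", "app", "docs", "compiler", "X11"}
minimal = minimize_appliance(appliance, check_service)
print(sorted(minimal))
```

A single greedy pass is quadratic in redeployments at worst; the dissertation's facility presumably orders candidates using the dependency information identified earlier, which this sketch omits.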
Getting started with cloud computing : a LITA guide
"A one-stop guide for implementing cloud computing. Cloud computing can save your library time and money by enabling convenient, on-demand network access to resources like servers and applications. Libraries that take advantage of the cloud have fewer IT headaches because data centers provide continuous updates and mobility that standard computing cannot easily provide, which means less time and energy spent on software, and more time and energy to devote to the library's day-to-day mission and services. Here, leading LITA experts demystify language, deflate hype, and provide library-specific examples of real-world success you can emulate to guarantee efficiency and savings. Among several features, this book helps you select data access and file sharing services, build digital repositories, and utilize other cloud computing applications in your library. Together, you and the cloud can save time and money, and build the information destination your patrons will love."--Publisher's website.
Edward M. Corrado, Heather Lea Moulaison, editors; with a foreword by Roy Tennant.
Perspectives on cloud computing in libraries / Heather Lea Moulaison and Edward M. Corrado -- Understanding the cloud : an introduction to the cloud / Rosalyn Metz -- Cloud computing : pros and cons / H. Frank Cervone -- What cloud computing means for libraries / Erik Mitchell -- Head in the clouds? A librarian/vendor perspective on cloud computing / Carl Grant -- Cloud computing for LIS education / Christinger R. Tomer and Susan W. Alman -- Library discovery services : from the ground to the cloud / Marshall Breeding -- Koha in the cloud / Christopher R. Nighswonger and Nicole C. Engard -- Leveraging OCLC cooperative library data in the cloud via web services / Karen A. Coombs -- Building push-button repositories in the cloud with dspace and amazon web services -- Untethering considerations : selecting a cloud-based data access and file-sharing solution / Heidi M. Nickisch Duggan and Michelle Frisque -- Sharepoint strategies for establishing a powerful library intranet / Jennifer Diffin and Dennis Nangle -- Using windows home server and amazon s3 to back up high-resolution digital objects to the cloud / Edward Iglesias -- Keeping your data on the ground when putting your (lib)guides in the cloud / Karen A. Reiman-Sendi, Kenneth J. Varnum, and Albert A. Bertram -- Parting the clouds : use of dropbox by embedded librarians / Caitlin A. Bagley -- From the cloud, a clear solution : how one academic library uses google calendar / Anne Leonard -- Integrating google forms into reference and instruction / Robin Elizabeth Miller -- Ning, fostering conversations in the cloud / Leland R. Deeds, Cindy Kissel-Ito, and Ann Thomas Knox -- Not every cloud has a silver lining : using a cloud application may not always be the best solution / Ann Whitney Gleason -- Speak up! using voicethread to encourage participation and collaboration in library instruction / Jennifer Ditkoff and Kara Young.
Includes bibliographical references and index.