7,082 research outputs found
Global Grids and Software Toolkits: A Study of Four Grid Middleware Technologies
The Grid is an infrastructure that involves the integrated and collaborative
use of computers, networks, databases and scientific instruments owned and
managed by multiple organizations. Grid applications often involve large
amounts of data and/or computing resources that require secure resource
sharing across organizational boundaries, which makes Grid application
management and deployment a complex undertaking. Grid middleware provides
users with seamless computing ability and uniform access to resources in the
heterogeneous Grid environment. Several software toolkits and systems, most
of them the results of academic research projects around the world, have
been developed. This chapter focuses on four of these middleware systems:
UNICORE, Globus, Legion and Gridbus. It also presents our implementation of
a resource broker for UNICORE, as this functionality was not previously
supported. A comparison of these systems on the basis of architecture,
implementation model and several other features is included.
Mediated data integration and transformation for web service-based software architectures
Service-oriented architecture using XML-based Web services has been widely accepted by many organisations as the standard infrastructure for integrating heterogeneous and autonomous data sources. As a result, many Web service providers are built on top of the data sources to share the data by supporting provided and required interfaces and methods of data access in a unified manner. In the context of data integration, problems arise when Web services are assembled to deliver an integrated view of data, adaptable to the specific needs of individual clients and providers. Traditional approaches to data integration and transformation are not suitable for automating the construction of connectors dedicated to connecting selected Web services in order to render integrated and tailored views of data. We propose a declarative approach that addresses the often-neglected data integration and adaptivity aspects of service-oriented architecture.
An information retrieval approach to ontology mapping
In this paper, we present a heuristic mapping method and a prototype mapping system that support the process of semi-automatic ontology mapping for the purpose of improving semantic interoperability in heterogeneous systems. The approach is based on the idea of semantic enrichment, i.e., using instance information of the ontology to enrich the original ontology and calculate similarities between concepts in two ontologies. The functional settings for the mapping system are discussed and the evaluation of the prototype implementation of the approach is reported.
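The instance-based similarity idea described in this abstract can be sketched with a small example: each concept is "enriched" with the text of its instances, and concepts from two ontologies are compared by the cosine similarity of the resulting term profiles. This is a minimal illustration under that reading of the abstract, not the paper's actual system; all concept data below are invented.

```python
from collections import Counter
from math import sqrt

def instance_profile(instances):
    """Build a term-frequency profile from the text of a concept's instances."""
    counts = Counter()
    for text in instances:
        counts.update(text.lower().split())
    return counts

def cosine_similarity(p, q):
    """Cosine similarity between two term-frequency profiles."""
    shared = set(p) & set(q)
    dot = sum(p[t] * q[t] for t in shared)
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

# Hypothetical concepts from two ontologies, each enriched with instance text
concept_a = instance_profile(["grid computing middleware", "distributed grid resources"])
concept_b = instance_profile(["middleware for grid computing", "resource brokering"])
print(round(cosine_similarity(concept_a, concept_b), 3))  # → 0.577
```

Pairs of concepts whose similarity exceeds a chosen threshold would then be proposed as mapping candidates for a human to confirm, matching the semi-automatic workflow the abstract describes.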
Combined aptamer and transcriptome sequencing of single cells.
The transcriptome and proteome encode distinct information that is important for characterizing heterogeneous biological systems. We demonstrate a method to simultaneously characterize the transcriptomes and proteomes of single cells at high throughput using aptamer probes and droplet-based single cell sequencing. With our method, we differentiate distinct cell types based on aptamer surface binding and gene expression patterns. Aptamers provide advantages over antibodies for single cell protein characterization, including rapid, in vitro, and high-purity generation via SELEX, and the ability to amplify and detect them with PCR and sequencing.
Behavior change interventions: the potential of ontologies for advancing science and practice
A central goal of behavioral medicine is the creation of evidence-based interventions for promoting behavior change. Scientific knowledge about behavior change could be more effectively accumulated using "ontologies." In information science, an ontology is a systematic method for articulating a "controlled vocabulary" of agreed-upon terms and their inter-relationships. It involves three core elements: (1) a controlled vocabulary specifying and defining existing classes; (2) specification of the inter-relationships between classes; and (3) codification in a computer-readable format to enable knowledge generation, organization, reuse, integration, and analysis. This paper introduces ontologies, provides a review of current efforts to create ontologies related to behavior change interventions, and suggests future work. This paper was written by behavioral medicine and information science experts and was developed in partnership between the Society of Behavioral Medicine's Technology Special Interest Group (SIG) and the Theories and Techniques of Behavior Change Interventions SIG. In recent years significant progress has been made in the foundational work needed to develop ontologies of behavior change. Ontologies of behavior change could facilitate a transformation of behavioral science from a field in which data from different experiments are siloed into one in which data across experiments could be compared and/or integrated. This could facilitate new approaches to hypothesis generation and knowledge discovery in behavioral science.
SPEIR: Scottish Portals for Education, Information and Research. Final Project Report: Elements and Future Development Requirements of a Common Information Environment for Scotland
The SPEIR (Scottish Portals for Education, Information and Research) project was funded by the Scottish Library and Information Council (SLIC). It ran from February 2003 to September 2004, slightly longer than the 18 months originally scheduled, and was managed by the Centre for Digital Library Research (CDLR). With SLIC's agreement, community stakeholders were represented in the project by the Confederation of Scottish Mini-Cooperatives (CoSMiC), an organisation whose members include SLIC, the National Library of Scotland (NLS), the Scottish Further Education Unit (SFEU), the Scottish Confederation of University and Research Libraries (SCURL), regional cooperatives such as the Ayrshire Libraries Forum (ALF), and representatives from the Museums and Archives communities in Scotland. Aims: a common information environment for Scotland. The aims of the project were to:
o Conduct basic research into the distributed information infrastructure requirements of the Scottish Cultural Portal pilot and the public library CAIRNS integration proposal;
o Develop associated pilot facilities by enhancing existing facilities or developing new ones;
o Ensure that both infrastructure proposals and pilot facilities were sufficiently generic to be utilised in support of other portals developed by the Scottish information community;
o Ensure the interoperability of infrastructural elements beyond Scotland through adherence to established or developing national and international standards.
Since the Scottish information landscape is taken by CoSMiC members to encompass relevant activities in Archives, Libraries, Museums, and related domains, the project was, in essence, concerned with identifying, researching, and developing the elements of an internationally interoperable common information environment for Scotland, and with determining the best path for future progress.
A Question of Empowerment: Information Technology and Civic Engagement in New Haven, Connecticut
Extravagant claims have been made for the capacity of IT (information technology) to empower citizens and to enhance the capacity of civic organizations. This study of IT use by organizations and agencies in New Haven, Connecticut, 1998-2004, tests these claims, finding that the use of IT by nonprofits is selective, tending to serve agencies patronized by community elites rather than populations in need. In addition, the study finds that single-interest groups are far more effective in using IT than more diverse civic and neighborhood groups. This publication is Hauser Center Working Paper No. 30. The Hauser Center Working Paper Series was launched during the summer of 2000. The Series enables the Hauser Center to share with a broad audience important works-in-progress written by Hauser Center scholars and researchers.
Towards A Taxonomy of Emerging Topics in Open Government Data: A Bibliometric Mapping Approach
The purpose of this paper is to capture the emerging research topics in Open Government Data (OGD) through a bibliometric mapping approach. Previous OGD research has covered the evolution of the discipline with the application of bibliometric mapping tools; however, none of these studies have extended the bibliometric mapping approach to taxonomy building. Realizing this potential, we used a bibliometric tool to perform keyword analysis as a foundation for taxonomy construction. A set of keyword clusters was constructed, and qualitative analysis software was used for taxonomy creation. Emerging topics were identified in taxonomy form. This study contributes to the development of an OGD taxonomy and to the procedural realignment of a past study by incorporating taxonomy-building elements. These contributions are significant because taxonomy research in the OGD discipline remains scarce. The taxonomy-building procedures extended in this study are applicable to other fields.
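The keyword-analysis step behind such a taxonomy can be illustrated with a small co-occurrence clustering: keywords that appear together in enough records are linked, and clusters are the connected components of the link graph. This is a toy stand-in for what a bibliometric mapping tool does, not the paper's procedure; the threshold, data, and function names are all invented for illustration.

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_clusters(records, min_count=2):
    """Cluster keywords by co-occurrence: link two keywords if they appear
    together in at least `min_count` records, then return the connected
    components of the link graph (keywords with no strong link are dropped)."""
    pair_counts = defaultdict(int)
    for kws in records:
        for a, b in combinations(sorted(set(kws)), 2):
            pair_counts[(a, b)] += 1

    # Union-find over keywords joined by a sufficiently frequent pair
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for (a, b), n in pair_counts.items():
        if n >= min_count:
            parent[find(a)] = find(b)

    clusters = defaultdict(set)
    for kw in parent:
        clusters[find(kw)].add(kw)
    return sorted(map(sorted, clusters.values()))

# Hypothetical keyword lists from four OGD papers
records = [
    ["open data", "transparency"], ["open data", "transparency"],
    ["linked data", "semantic web"], ["linked data", "semantic web"],
]
print(cooccurrence_clusters(records))
# → [['linked data', 'semantic web'], ['open data', 'transparency']]
```

Each resulting cluster would then be labelled and arranged into taxonomy branches, the qualitative step the abstract assigns to analysis software.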
StreamOnTheFly: a peer-to-peer network for radio stations and podcasters
The StreamOnTheFly network demonstrates new approaches to the
management and personalisation of audio content. The architecture
is based on a decentralised network of software components that
replicate metadata automatically in a peer-to-peer manner. The
network also promotes a new common metadata schema and content
exchange format. Content reuse and content exchange are made
possible by StreamOnTheFly in several use cases.
Reducing semantic complexity in distributed Digital Libraries: treatment of term vagueness and document re-ranking
The purpose of the paper is to propose models to reduce the semantic
complexity in heterogeneous DLs. The aim is to introduce value-added services
(treatment of term vagueness and document re-ranking) that gain a certain
quality in DLs if they are combined with heterogeneity components established
in the project "Competence Center Modeling and Treatment of Semantic
Heterogeneity". Empirical observations show that freely formulated user terms
and terms from controlled vocabularies are often not the same or match just by
coincidence. Therefore, a value-added service will be developed which rephrases
the natural language searcher terms into suggestions from the controlled
vocabulary, the Search Term Recommender (STR). Two methods, which are derived
from scientometrics and network analysis, will be implemented with the
objective to re-rank result sets by the following structural properties: the
ranking of the results by core journals (so-called Bradfordizing) and ranking
by centrality of authors in co-authorship networks.Comment: 12 pages, 4 figure
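The Bradfordizing re-rank mentioned above amounts to promoting documents from "core" journals, i.e. the journals that occur most often in the result set. The sketch below is a minimal reading of that idea, not the project's implementation; the data and function name are hypothetical.

```python
from collections import Counter

def bradfordize(results):
    """Re-rank results so documents from high-frequency ('core') journals
    come first. `results` is a list of dicts with a 'journal' key; because
    sorted() is stable, the original order is kept as a tiebreaker among
    documents from equally frequent journals."""
    journal_freq = Counter(r["journal"] for r in results)
    return sorted(results, key=lambda r: -journal_freq[r["journal"]])

hits = [
    {"title": "A", "journal": "J. Rare"},
    {"title": "B", "journal": "J. Core"},
    {"title": "C", "journal": "J. Core"},
    {"title": "D", "journal": "J. Core"},
]
print([r["title"] for r in bradfordize(hits)])  # → ['B', 'C', 'D', 'A']
```

The co-authorship centrality re-rank would follow the same pattern, with the sort key replaced by a centrality score computed over the author network.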