A Molecular Biology Database Digest
Computational Biology, or Bioinformatics, has been defined as the application of mathematical
and Computer Science methods to solving problems in Molecular Biology that require large-scale
data, computation, and analysis [18]. As expected, Molecular Biology databases play an essential
role in Computational Biology research and development. This paper gives an introduction to
current Molecular Biology databases, stressing data modeling, data acquisition, data retrieval,
and the integration of Molecular Biology data from different sources. It is primarily intended
for an audience of computer scientists with a limited background in Biology.
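As a concrete illustration of the data-modeling concerns such databases raise, the following sketch parses a FASTA-formatted sequence record (a common Molecular Biology exchange format) into plain Python structures. The records themselves are invented examples, not entries from any real database.

```python
# Minimal sketch of Molecular Biology data modeling: parsing a FASTA
# record into (header, sequence) pairs. The sequences below are
# hypothetical examples, not real database entries.

def parse_fasta(text):
    """Parse FASTA-formatted text into a list of (header, sequence) pairs."""
    records = []
    header, chunks = None, []
    for line in text.strip().splitlines():
        if line.startswith(">"):
            if header is not None:
                records.append((header, "".join(chunks)))
            header, chunks = line[1:].strip(), []
        else:
            chunks.append(line.strip())
    if header is not None:
        records.append((header, "".join(chunks)))
    return records

example = """>seq1 hypothetical protein
MKTAYIAKQR
QISFVKSHFS
>seq2 hypothetical enzyme
GGTAACGT"""

for name, seq in parse_fasta(example):
    print(name, len(seq))
```

Real systems would of course use an established parser (e.g., from Biopython) rather than hand-rolled code; the point here is only how a flat text format maps onto a structured data model.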
Querying Large Physics Data Sets Over an Information Grid
Optimising use of the Web (WWW) for LHC data analysis is a complex problem
and illustrates the challenges arising from the integration of and computation
across massive amounts of information distributed worldwide. Finding the right
piece of information can, at times, be extremely time-consuming, if not
impossible. So-called Grids have been proposed to facilitate LHC computing and
many groups have embarked on studies of data replication, data migration and
networking philosophies. Other aspects such as the role of 'middleware' for
Grids are emerging as requiring research. This paper positions the need for
appropriate middleware that enables users to resolve physics queries across
massive data sets. It identifies the role of meta-data for query resolution and
the importance of Information Grids for high-energy physics analysis rather
than just Computational or Data Grids. It describes software being implemented at CERN to
enable the querying of very large collaborating HEP data-sets, initially employed in the
construction of CMS detectors.
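To illustrate the role the abstract assigns to meta-data in query resolution (this is a language-neutral toy, not the software actually implemented at CERN), the sketch below resolves a physics query against a small metadata catalogue to find the sites holding matching data sets. All dataset names, attributes, and site names are invented.

```python
# Illustrative sketch (not CERN's actual software): a metadata catalogue
# that resolves a physics query to the storage sites holding matching
# data sets. All dataset and site names are invented for illustration.

catalogue = [
    {"dataset": "runA", "detector": "CMS",   "energy_tev": 13, "site": "site-eu"},
    {"dataset": "runB", "detector": "CMS",   "energy_tev": 7,  "site": "site-us"},
    {"dataset": "runC", "detector": "ATLAS", "energy_tev": 13, "site": "site-eu"},
]

def resolve(query):
    """Return the sorted list of sites holding data sets matching the query."""
    return sorted({entry["site"] for entry in catalogue
                   if all(entry.get(k) == v for k, v in query.items())})

print(resolve({"detector": "CMS", "energy_tev": 13}))  # only runA matches
```

The design point is that the query never touches the (massive) data itself: meta-data alone narrows the search to the few sites worth contacting.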
Open-source digital technologies for low-cost monitoring of historical constructions
This paper shows new possibilities of using novel, open-source, low-cost platforms for the structural health monitoring of heritage structures. The objective of the study is to assess increasingly available open-source digital modeling and fabrication technologies in order to identify suitable counterparts to the typical components of a continuous static monitoring system for a historical construction. The results of the research include a simple case study, presented with low-cost, open-source, calibrated components, as well as an assessment of different alternatives for deploying basic structural health monitoring arrangements. The results show the great potential of these existing technologies, which may help to promote widespread and cost-efficient monitoring of the built cultural heritage. Such a scenario may contribute to the onset of commonplace digital records of historical constructions in an open-source, versatile and reliable fashion.
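The software side of such a continuous static monitoring system can be sketched very simply: periodic sensor readings are compared against a baseline and flagged when they drift beyond a threshold. The readings, the baseline, and the 0.05 mm threshold below are all invented for illustration; a real deployment would calibrate these against the instrumented structure.

```python
# Minimal sketch of the alerting logic of a low-cost static monitoring
# system: compare readings of a crack-width sensor (simulated here)
# against a baseline and flag drift beyond a threshold. The values and
# the 0.05 mm threshold are invented for illustration.

BASELINE_MM = 1.20    # initial crack width at installation time
THRESHOLD_MM = 0.05   # drift that triggers an alert

def check_readings(readings_mm):
    """Return the indices of readings that drift beyond the threshold."""
    return [i for i, r in enumerate(readings_mm)
            if abs(r - BASELINE_MM) > THRESHOLD_MM]

simulated = [1.21, 1.22, 1.24, 1.27, 1.31]  # slow crack opening over time
print(check_readings(simulated))
```

On an open-source microcontroller platform the same loop would read an analogue input instead of a list, but the logging-and-threshold structure is unchanged.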
Logic-Based Specification Languages for Intelligent Software Agents
The research field of Agent-Oriented Software Engineering (AOSE) aims to find
abstractions, languages, methodologies and toolkits for modeling, verifying,
validating and prototyping complex applications conceptualized as Multiagent
Systems (MASs). A very lively research sub-field studies how formal methods can
be used for AOSE. This paper presents a detailed survey of six logic-based
executable agent specification languages that have been chosen for their
potential to be integrated in our ARPEGGIO project, an open framework for
specifying and prototyping a MAS. The six languages are ConGoLog, Agent-0, the
IMPACT agent programming language, DyLog, Concurrent METATEM and Ehhf. For each
executable language, the logic foundations are described and an example of use
is shown. A comparison of the six languages and a survey of similar approaches
complete the paper, together with considerations of the advantages of using
logic-based languages in MAS modeling and prototyping. Accepted for publication in the journal
"Theory and Practice of Logic Programming", volume 4 (Maurice Bruynooghe, Editor-in-Chief).
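As a language-neutral illustration of what "executable agent specification" means (this toy is not ConGoLog, Agent-0, the IMPACT language, DyLog, Concurrent METATEM, or Ehhf), the sketch below encodes an agent as condition-action rules evaluated against a set of beliefs, with each fired action extending the belief set.

```python
# Toy illustration of an executable agent specification: condition-action
# rules evaluated against the agent's beliefs. Not any of the six
# surveyed languages; rule contents are invented for illustration.

rules = [
    # (condition over beliefs, action name, belief added by the action)
    (lambda b: "request" in b and "served" not in b, "serve", "served"),
    (lambda b: "served" in b and "logged" not in b,  "log",   "logged"),
]

def step(beliefs):
    """Fire the first applicable rule; return (action, updated beliefs)."""
    for cond, action, effect in rules:
        if cond(beliefs):
            return action, beliefs | {effect}
    return None, beliefs

beliefs = {"request"}
action1, beliefs = step(beliefs)  # first applicable rule: serve
action2, beliefs = step(beliefs)  # then: log the served request
print(action1, action2, sorted(beliefs))
```

The logic-based languages surveyed in the paper express the same cycle declaratively, with the added benefit that the specification itself can be verified formally before being executed.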
Predicting Intermediate Storage Performance for Workflow Applications
Configuring a storage system to better serve an application is a challenging
task complicated by a multidimensional, discrete configuration space and the
high cost of space exploration (e.g., by running the application with different
storage configurations). To enable selecting the best configuration in a
reasonable time, we design an end-to-end performance prediction mechanism that
estimates the turn-around time of an application using a storage system under a
given configuration. This approach focuses on a generic object-based storage
system design, supports exploring the impact of optimizations targeting
workflow applications (e.g., various data placement schemes) in addition to
other, more traditional, configuration knobs (e.g., stripe size or replication
level), and models the system operation at data-chunk and control message
level.
This paper presents our experience to date with designing and using this
prediction mechanism. We evaluate it using micro-benchmarks as well as synthetic
benchmarks mimicking real workflow applications, and a real application. A
preliminary evaluation shows that we are on a good track to meet our objectives:
the mechanism can scale to model a workflow application run on an entire cluster
while offering an over 200x speedup factor (normalized by resource) compared to
running the actual application, and can achieve, in the limited number of
scenarios we study, a prediction accuracy that enables identifying the best
storage system configuration.
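A back-of-envelope sketch can convey the flavour of chunk-level modeling, though it is not the paper's actual predictor: estimate the write time of a file to an object-based store from the chunk count, replication level, node bandwidth, and a per-chunk control-message cost. All parameter values below are invented.

```python
# Back-of-envelope sketch of chunk-level performance modeling (not the
# paper's actual predictor): estimate write turn-around time from chunk
# size, replication level, and node count. All parameters are invented.

import math

def predict_write_time(file_mb, chunk_mb, replication, nodes,
                       node_bw_mbps=100.0, per_chunk_overhead_s=0.002):
    """Estimate write turn-around time in seconds under a configuration."""
    chunks = math.ceil(file_mb / chunk_mb)
    total_mb = chunks * chunk_mb * replication         # data actually stored
    transfer_s = total_mb / (node_bw_mbps * nodes)     # nodes write in parallel
    control_s = chunks * replication * per_chunk_overhead_s  # message cost
    return transfer_s + control_s

# Compare two candidate chunk sizes for a 1 GB file on 4 nodes: small
# chunks pay far more in control-message overhead for the same transfer.
for chunk_mb in (1, 64):
    t = predict_write_time(1024, chunk_mb, replication=2, nodes=4)
    print(f"chunk={chunk_mb} MB -> {t:.2f} s")
```

Even this crude model shows why a discrete, multidimensional configuration space is worth exploring analytically: the cheaper configuration emerges without running the application at all.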
Towards a Distributed Quantum Computing Ecosystem
The Quantum Internet, by enabling quantum communications among remote quantum
nodes, is a network capable of supporting functionalities with no direct
counterpart in the classical world. Indeed, with the network and communications
functionalities provided by the Quantum Internet, remote quantum devices can
communicate and cooperate for solving challenging computational tasks by
adopting a distributed computing approach. The aim of this paper is to provide
the reader with an overview of the main challenges and open problems arising
in the design of a Distributed Quantum Computing ecosystem. To this end, we
provide a survey, following a bottom-up approach, from a communications
engineering perspective. We start by introducing the Quantum Internet as the
fundamental underlying infrastructure of the Distributed Quantum Computing
ecosystem. Then we go further, by elaborating on a high-level system
abstraction of the Distributed Quantum Computing ecosystem. Such an abstraction
is described through a set of logical layers. Thereby, we clarify dependencies
among the aforementioned layers and, at the same time, a road-map emerges
- …