Semantic Storage: Overview and Assessment
The Semantic Web has a great deal of momentum behind it. The promise of a "better web", where information is given well-defined meaning and computers are better able to work with it, has captured the imagination of a significant number of people, particularly in academia. Language standards such as RDF and OWL have appeared with remarkable speed, and development continues apace. To back up this development, there is a requirement for "semantic databases", where this data can be conveniently stored, operated upon, and retrieved. These already exist in the form of triple stores, but do not yet fulfil all the requirements that may be made of them, particularly in the area of performing inference using OWL. This paper analyses the current stores along with forthcoming technology, and finds that it is unlikely that a combination of speed, scalability, and complex inferencing will be practical in the immediate future. It concludes by suggesting alternative development routes.
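The triple stores the paper assesses all share the same basic data model: a set of (subject, predicate, object) statements retrieved by pattern matching. A minimal in-memory sketch of that idea (the `ex:`-prefixed names below are purely illustrative, not drawn from the paper):

```python
def match(triples, s=None, p=None, o=None):
    # Wildcard (None) pattern matching over (subject, predicate, object)
    # statements: the core retrieval operation every triple store offers.
    return [t for t in triples
            if s in (None, t[0]) and p in (None, t[1]) and o in (None, t[2])]

# A toy graph; the ex: names are invented for illustration.
triples = [
    ("ex:Alice", "rdf:type", "ex:Person"),
    ("ex:Alice", "ex:knows", "ex:Bob"),
    ("ex:Bob", "rdf:type", "ex:Person"),
]

print(match(triples, p="rdf:type"))               # every typing statement
print(match(triples, s="ex:Alice", p="ex:knows")) # Alice's acquaintances
```

Real stores add indexing, SPARQL parsing, and (the hard part the paper highlights) OWL inference on top of exactly this matching primitive.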
Using Links to prototype a Database Wiki
Both relational databases and wikis have strengths that make them attractive for use in collaborative applications. In the last decade, database-backed Web applications have been used extensively to develop valuable shared biological references called curated databases. Databases offer many advantages such as scalability, query optimization and concurrency control, but are not easy to use and lack other features needed for collaboration. Wikis have become very popular for early-stage biocuration projects because they are easy to use, encourage sharing and collaboration, and provide built-in support for archiving, history-tracking and annotation. However, curation projects often outgrow the limited capabilities of wikis for structuring and efficiently querying data at scale, necessitating a painful phase transition to a database-backed Web application. We perceive a need for a new class of general-purpose system, which we call a Database Wiki, that combines flexible wiki-like support for collaboration with robust database-like capabilities for structuring and querying data. This paper presents DBWiki, a design prototype for such a system written in the Web programming language Links. We present the architecture, typical use, and wiki markup language design for DBWiki and discuss features of Links that provided unique advantages for rapid Web/database application prototyping.
Logging and bookkeeping, Administrator's guide
Logging and Bookkeeping (LB for short) is a Grid service that keeps a short-term trace of Grid jobs as they are processed by individual Grid components.
Low latency via redundancy
Low latency is critical for interactive networked applications. But while we know how to scale systems to increase capacity, reducing latency, especially the tail of the latency distribution, can be much more difficult. In this paper, we argue that the use of redundancy is an effective way to convert extra capacity into reduced latency. By initiating redundant operations across diverse resources and using the first result which completes, redundancy improves a system's latency even under exceptional conditions. We study the tradeoff with added system utilization, characterizing the situations in which replicating all tasks reduces mean latency. We then demonstrate empirically that replicating all operations can result in significant mean and tail latency reduction in real-world systems including DNS queries, database servers, and packet forwarding within networks.
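The core mechanism, issuing the same operation against several diverse resources and keeping whichever result arrives first, can be sketched in a few lines. The simulated replicas below are stand-ins for illustration; the paper's experiments use real DNS servers and databases:

```python
import concurrent.futures
import random
import time

def query_replica(replica_id, payload):
    # Simulated backend with variable latency -- a stand-in for a DNS
    # server or database replica (names here are illustrative).
    time.sleep(random.uniform(0.01, 0.2))
    return f"{payload}-from-replica-{replica_id}"

def redundant_request(payload, replicas=(1, 2, 3)):
    # Issue the identical operation on every replica and return the
    # first result that completes; the slower duplicates are discarded.
    with concurrent.futures.ThreadPoolExecutor(len(replicas)) as pool:
        futures = [pool.submit(query_replica, r, payload) for r in replicas]
        done, _ = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        return next(iter(done)).result()

print(redundant_request("lookup"))
```

The response time of the whole request is the minimum of the replicas' latencies, which is exactly how redundancy trims the tail; the cost, as the abstract notes, is the extra utilization consumed by the duplicates.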
The National Protective Inventory and Malta scheduled property register : Malta's baseline for cultural heritage protection and more
Chapter 4. Statutory heritage protection in the Maltese Islands first started in 1925 with the publication of the Antiquities (Protection) Act, which was followed by the Antiquities (Protection) List of 1932, amended in 1936 and 1939. The Antiquities (Protection) List was essentially a "shopping list" of properties meriting protection; however, the list was extremely basic and generic. The information provided varied depending on the compilers' familiarity with the sites at the time. No site plans were published with the list; indeed, in certain cases a feature of a house in a street was the only feature being protected within a single locality, which made locating the site in question difficult, let alone protecting it.
Apart from this, however, little was done to protect heritage in Malta between 1939 and 1992, when the (then) Planning Authority was set up. Indeed, heritage protection by MEPA commenced in 1994 with the identification of the most important archaeological sites and areas, the delineation of Urban Conservation Areas for the fortified cities around the harbour, and the identification of specific sites then under study through the Marsaxlokk Bay and North Harbours Local Plans. Protection of individual sites and buildings continued somewhat sporadically until 2006, when a thematic scheduling agenda was drawn up. Although a few groups of thematic scheduling had been carried out by then, most scheduling was undertaken depending on the studies being conducted at the time.
The NPI and MSPR, originally referred to as the List of Scheduled Property, started off as little more than a list similar to the Antiquities List, with the addition of pertinent information such as the proper address, images, a site plan denoting the extent and site curtilage if necessary, and other information required for planning purposes. In the late 2000s, the need was felt for better organisation of the information available, and with it the better organisation of the NPI and the creation of the MSPR.
Connecting to the Data-Intensive Future of Scientific Research
In recent years enormous amounts of digital data have become available to scientific researchers. This flood of data is transforming the way scientific research is conducted. Independent researchers are in serious need of tools that will help them manage and preserve the large volumes of data being created in their own labs. Data management will not only help researchers get or keep a handle on their data, it will also help them stay relevant and competitive in increasingly strict funding environments. This paper provides summaries of best practices and case studies of data management that relate to three common data management challenges: multitudinous sensor data, short-term data loss, and digital images. We use a combination of open system solutions such as HydroServer Lite, an open system database for time series data, and proprietary tools such as Adobe Photoshop Lightroom. Each lab may require its own unique suite of tools, but these are becoming numerous and readily available, making it easier to archive and share data with collaborators and to discover and integrate published data sets.
On Constructing Persistent Identifiers with Persistent Resolution Targets
Persistent Identifiers (PIDs) are the foundation for referencing digital assets in scientific publications, books, and digital repositories. In its realization, a PID contains metadata and resolution targets in the form of URLs that point to data sets located on the network. In contrast to PIDs themselves, the target URLs typically change over time; thus, PIDs need continuous maintenance, an effort that is increasing tremendously with the advancement of e-Science and the advent of the Internet-of-Things (IoT). Nowadays, billions of sensors and data sets are subject to PID assignment. This paper presents a new approach of embedding location-independent targets into PIDs that allows the creation of maintenance-free PIDs using content-centric network technology and overlay networks. To prove the validity of the presented approach, the Handle PID System is used in conjunction with Magnet Link access information encoding, state-of-the-art decentralized data distribution with BitTorrent, and Named Data Networking (NDN) as a location-independent data access technology for networks. In contrast to existing approaches, no green-field implementation of PIDs or major modification of the Handle System is required to enable location-independent data dissemination with maintenance-free PIDs.
Comment: Published IEEE paper of the FedCSIS 2016 (SoFAST-WS'16) conference, 11.-14. September 2016, Gdansk, Poland. Also available online: http://ieeexplore.ieee.org/document/7733372
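One way to read the location-independent target idea: the PID stores a content-derived name, such as a BitTorrent info-hash wrapped in a magnet link, rather than a mutable URL. The sketch below is hypothetical and does not use the actual Handle System API; the prefix `20.5000.999`, the record fields, and the direct hashing of the raw bytes (real info-hashes cover a torrent's bencoded info dictionary) are all simplifications for illustration:

```python
import hashlib
import urllib.parse

def magnet_target(data: bytes, name: str) -> str:
    # Content-derived target: SHA-1 is the hash family used by
    # BitTorrent info-hashes. Hashing the raw bytes here is a
    # simplification of how a real info-hash is computed.
    digest = hashlib.sha1(data).hexdigest()
    return f"magnet:?xt=urn:btih:{digest}&dn={urllib.parse.quote(name)}"

def make_pid_record(prefix: str, suffix: str, data: bytes, name: str) -> dict:
    # Hypothetical Handle-style record whose target names *what* the
    # data is rather than *where* it lives, so the record needs no
    # maintenance when the data moves between hosts.
    return {"pid": f"{prefix}/{suffix}",
            "target": magnet_target(data, name)}

record = make_pid_record("20.5000.999", "sensor-42",
                         b"example data set", "sensor-42.csv")
print(record["target"])
```

Resolution then goes through a content-centric layer (BitTorrent or NDN, per the paper) that locates any current copy of the bytes matching the hash.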
The INCF Digital Atlasing Program: Report on Digital Atlasing Standards in the Rodent Brain
The goal of the INCF Digital Atlasing Program is to provide the vision and direction necessary to make the rapidly growing collection of multidimensional data of the rodent brain (images, gene expression, etc.) widely accessible and usable to the international research community. This Digital Brain Atlasing Standards Task Force was formed in May 2008 to investigate the state of rodent brain digital atlasing, and formulate standards, guidelines, and policy recommendations.

Our first objective has been the preparation of a detailed document that includes the vision and a specific description of an infrastructure, systems, and methods capable of serving the scientific goals of the community, as well as practical issues for achieving those goals. This report builds on the 1st INCF Workshop on Mouse and Rat Brain Digital Atlasing Systems (Boline et al., 2007, _Nature Precedings_, doi:10.1038/npre.2007.1046.1) and includes a more detailed analysis of both the current state and the desired state of digital atlasing, along with specific recommendations for achieving these goals.