Managing semantic Grid metadata in S-OGSA
Grid resources such as data, services, and equipment are increasingly being annotated with descriptive metadata that facilitates their discovery and use in the context of Virtual Organizations (VOs). Making this growing body of metadata explicit and available to Grid services is key to the success of the VO paradigm. In this paper we present a model and management architecture for Semantic Bindings, i.e., first-class Grid entities that encapsulate metadata on the Grid and make it available through predictable access patterns. The model is at the core of the S-OGSA reference architecture for the Semantic Grid.
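The abstract treats Semantic Bindings abstractly; as a loose sketch of the idea of a first-class entity that encapsulates metadata and exposes it through a predictable access pattern, one could imagine something like the following in Python (all class, field, and property names here are illustrative assumptions, not part of S-OGSA):

    from dataclasses import dataclass, field

    @dataclass
    class SemanticBinding:
        """Hypothetical first-class entity pairing a Grid resource with its metadata."""
        subject_uri: str                              # the Grid entity being described
        metadata: dict = field(default_factory=dict)  # RDF-like property/value pairs

        def get(self, prop: str, default=None):
            # Predictable access pattern: every binding answers the same interface.
            return self.metadata.get(prop, default)

    # Illustrative use: annotate a service, then query the annotation.
    binding = SemanticBinding(
        subject_uri="grid://vo.example/services/transcoder",
        metadata={"dc:description": "video transcoding service", "vo:member": "vo.example"},
    )
    assert binding.get("vo:member") == "vo.example"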
The ReSIST Resilience Knowledge Base
We describe a prototype knowledge base that uses semantic web technologies to provide a service for querying a large and expanding collection of public data about resilience, dependability, and security. We report progress and identify opportunities to support resilience-explicit computing by developing metadata-based descriptions of resilience mechanisms that can be used to support design-time and, potentially, run-time decision making.
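The abstract does not fix a query interface, but "semantic web technologies" suggests RDF and SPARQL; a minimal sketch using rdflib, assuming a hypothetical resist: vocabulary and a local Turtle dump of the knowledge base, might look like this:

    from rdflib import Graph

    # Load a (hypothetical) slice of the knowledge base serialized as Turtle.
    g = Graph()
    g.parse("resilience-kb.ttl", format="turtle")

    # Find mechanisms addressing availability; the vocabulary (resist:
    # namespace, class and predicate names) is invented for illustration.
    query = """
    PREFIX resist: <http://example.org/resist#>
    SELECT ?mechanism ?description WHERE {
        ?mechanism a resist:ResilienceMechanism ;
                   resist:addresses resist:Availability ;
                   resist:description ?description .
    }
    """
    for mechanism, description in g.query(query):
        print(mechanism, description)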
A Framework for collaborative writing with recording and post-meeting retrieval capabilities
From an HCI perspective, elucidating and supporting the context in which collaboration takes place is key to implementing successful collaborative systems. Synchronous collaborative writing usually takes place in contexts involving a "meeting" of some sort. Collaborative writing meetings can be face-to-face or, increasingly, remote Internet-based meetings. The latter present software developers with the possibility of incorporating multimedia recording and information retrieval capabilities into the collaborative environment. The collaborative writing that ensues can be seen as an activity encompassing asynchronous as well as synchronous aspects. In order for revisions, information retrieval, and other forms of post-meeting, asynchronous work to be effectively supported, the synchronous collaborative editor must be able to appropriately detect and record meeting metadata. This paper presents a collaborative editor that supports recording of user actions and explicit metadata production. Design and technical implications of introducing such capabilities are discussed with respect to document segmentation, consistency control, and awareness mechanisms.
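The recording mechanism itself is left abstract in the paper; a minimal sketch of an action log that captures user edits together with per-segment meeting metadata, assuming invented field names, could be:

    import time
    from dataclasses import dataclass, asdict

    @dataclass
    class EditAction:
        """One recorded user action in a synchronous writing session."""
        user: str          # who performed the edit
        segment_id: str    # document segment affected (supports segmentation-aware retrieval)
        op: str            # e.g. "insert", "delete", "annotate"
        payload: str       # the text or annotation involved
        timestamp: float   # when it happened, for post-meeting replay

    log: list[EditAction] = []

    def record(user: str, segment_id: str, op: str, payload: str) -> None:
        # Append-only log: asynchronous post-meeting tools can later index it
        # for retrieval ("who changed section 2, and when?").
        log.append(EditAction(user, segment_id, op, payload, time.time()))

    record("alice", "sec-2", "insert", "Revised introduction paragraph.")
    print([asdict(a) for a in log])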
Addressing the tacit knowledge of a digital library system
Recent surveys about the Linked Data initiatives in library organizations report the experimental nature of related projects and the difficulty of re-using the data to improve library services. This paper presents an approach for managing data together with its "tacit" organizational knowledge, i.e., the context in which the data originated, improving the interpretation of the data's meaning. By analyzing a Digital Library system, we prototyped a method for turning data management into "semantic data management", where local system knowledge is managed as data and natively envisioned as Linked Data. Semantic data management aims to curate consumers' correct understanding of Linked Datasets, leading to proper re-use.
POOL File Catalog, Collection and Metadata Components
The POOL project is the common persistency framework for the LHC experiments, designed to store petabytes of experiment data and metadata in a distributed and grid-enabled way. POOL is a hybrid event store consisting of a data streaming layer and a relational layer. This paper describes the design of the file catalog, collection, and metadata components, which are not part of the data streaming layer of POOL, and outlines how POOL aims to provide transparent and efficient data access for a wide range of environments and use cases, ranging from a large production site down to a single disconnected laptop. The file catalog is the central POOL component translating logical data references to physical data files in a grid environment. POOL collections with their associated metadata provide an abstract way of accessing experiment data via their logical grouping into sets of related data objects.
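As a rough illustration of what translating logical data references into physical files involves, here is a toy catalog; the real POOL component is grid-aware and far richer, and the names below (LFN/PFN, register, lookup) are generic assumptions rather than the POOL API:

    class FileCatalog:
        """Toy logical-to-physical file catalog (not the real POOL interface)."""

        def __init__(self):
            self._replicas: dict[str, list[str]] = {}  # LFN -> list of PFNs

        def register(self, lfn: str, pfn: str) -> None:
            # A logical file name may map to several physical replicas on the grid.
            self._replicas.setdefault(lfn, []).append(pfn)

        def lookup(self, lfn: str) -> str:
            # Return one physical location; a real catalog would pick by site/cost.
            replicas = self._replicas.get(lfn)
            if not replicas:
                raise KeyError(f"no replica registered for {lfn}")
            return replicas[0]

    catalog = FileCatalog()
    catalog.register("lfn:/lhc/run1/events.root", "srm://site-a.example/store/events.root")
    print(catalog.lookup("lfn:/lhc/run1/events.root"))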
Heuristic usability evaluation on games: a modular approach
Heuristic evaluation is the preferred method for assessing usability in games when experts conduct the evaluation. Many heuristic guidelines have been proposed to address the specificities of games, but they only focus on specific subsets of games or platforms. In fact, to date the most widely used guideline for evaluating game usability is still Nielsen's proposal, which is focused on generic software. As a result, most evaluations do not cover aspects that are important in games, such as mobility, multiplayer interactions, enjoyability, and playability. To promote the use of new heuristics adapted to different game and platform aspects, we propose a modular approach based on the classification of existing game heuristics using metadata, together with a tool, MUSE (Meta-heUristics uSability Evaluation tool) for games, which rebuilds heuristic guidelines based on metadata selection in order to obtain a customized list for every real evaluation case. Using these rebuilt heuristic guidelines allows explicit attention to a wide range of usability aspects in games and better detection of usability issues. We preliminarily evaluate MUSE with an analysis of two different games, using both Nielsen's heuristics and the customized heuristic lists generated by our tool.
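The metadata-driven rebuilding of a guideline can be pictured as filtering a tagged pool of heuristics; the tags and heuristic texts below are invented examples, not MUSE's actual catalogue:

    # Each heuristic carries metadata tags describing where it applies.
    heuristics = [
        {"text": "Provide clear game goals",           "tags": {"generic"}},
        {"text": "Support interrupted play sessions",  "tags": {"mobile"}},
        {"text": "Make other players' status visible", "tags": {"multiplayer"}},
        {"text": "Keep controls responsive",           "tags": {"generic", "mobile"}},
    ]

    def rebuild_guideline(selected_tags: set[str]) -> list[str]:
        """Return a customized checklist: heuristics matching any selected tag."""
        return [h["text"] for h in heuristics if h["tags"] & selected_tags]

    # A mobile multiplayer game gets a list tailored to both aspects.
    print(rebuild_guideline({"mobile", "multiplayer"}))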
DiVM: Model Checking with LLVM and Graph Memory
In this paper, we introduce the concept of a virtual machine with graph-organised memory as a versatile backend for both explicit-state and abstraction-driven verification of software. Our virtual machine uses the LLVM IR as its instruction set, enriched with a small set of hypercalls. We show that the provided hypercalls are sufficient to implement a small operating system, which can then be linked with applications to provide a POSIX-compatible verification environment. Finally, we demonstrate the viability of the approach through a comparison with a more traditionally designed LLVM model checker.
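To make "graph-organised memory" concrete: instead of a flat address space, the heap is a graph whose objects are nodes and whose pointers are edges, which lets a model checker hash and compare whole states cheaply. The toy below is our own illustration of that idea, not DiVM's actual representation:

    # Toy graph memory: each node is an object; pointer fields hold node ids.
    class GraphMemory:
        def __init__(self):
            self._nodes: dict[int, dict] = {}
            self._next_id = 0

        def alloc(self, **fields) -> int:
            """Allocate a node; pointer-valued fields reference other node ids."""
            nid = self._next_id
            self._next_id += 1
            self._nodes[nid] = fields
            return nid

        def snapshot(self) -> frozenset:
            # A canonical, hashable view of the whole heap: exactly what an
            # explicit-state model checker needs to detect revisited states.
            return frozenset(
                (nid, tuple(sorted(f.items()))) for nid, f in self._nodes.items()
            )

    mem = GraphMemory()
    tail = mem.alloc(value=2, next=None)
    head = mem.alloc(value=1, next=tail)   # pointer edge: head -> tail
    seen = {mem.snapshot()}                # state hashing during the search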
Distributed Management of Massive Data: an Efficient Fine-Grain Data Access Scheme
This paper addresses the problem of efficiently storing and accessing massive data blocks in a large-scale distributed environment, while providing efficient fine-grain access to data subsets. This issue is crucial in the context of applications in the fields of databases, data mining, and multimedia. We propose a data sharing service based on distributed, RAM-based storage of data, while leveraging a DHT-based, natively parallel metadata management scheme. As opposed to the most commonly used grid storage infrastructures, which provide mechanisms for explicit data localization and transfer, we provide a transparent access model, where data are accessed through global identifiers. Our proposal has been validated through a prototype implementation whose preliminary evaluation provides promising results.
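A minimal way to picture transparent access through global identifiers over DHT-managed metadata: the identifier is hashed to locate the node holding the metadata, which in turn points at the nodes storing the block's chunks. Everything below (node layout, chunk scheme) is an illustrative assumption:

    import hashlib

    NODES = ["node-0", "node-1", "node-2", "node-3"]  # hypothetical metadata servers

    def responsible_node(global_id: str) -> str:
        # Consistent placement by hashing the identifier, as in a DHT.
        h = int(hashlib.sha1(global_id.encode()).hexdigest(), 16)
        return NODES[h % len(NODES)]

    # Metadata maps a global id to the chunks that make up the block, so a
    # client can fetch only the subset it needs (fine-grain access) without
    # knowing where the data physically lives.
    metadata_store = {
        "blob:dataset-42": {"chunks": [("node-1", 0, 65536), ("node-3", 65536, 131072)]}
    }

    node = responsible_node("blob:dataset-42")
    chunks = metadata_store["blob:dataset-42"]["chunks"]
    print(f"metadata for blob:dataset-42 lives on {node}; fetch chunks: {chunks}")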
Expressing the tacit knowledge of a digital library system as linked data
Library organizations have enthusiastically undertaken semantic web initiatives, in particular the publishing of data as linked data. Nevertheless, several surveys report the experimental nature of these initiatives and the difficulty consumers have in re-using the data. These barriers hinder the use of linked datasets as an infrastructure that enhances library and related information services. This paper presents an approach for encoding, as a linked vocabulary, the "tacit" knowledge of the information system that manages the data source. The objective is to improve the process of interpreting the meaning of published linked datasets. We analyzed a digital library system as a case study for prototyping the "semantic data management" method, where data and the knowledge about it are natively managed together, taking into account the linked data pillars. The ultimate objective of semantic data management is to curate consumers' correct interpretation of the data and to facilitate its proper re-use. The prototype defines the ontological entities representing the knowledge of the digital library system that is stored neither in the data source nor in the existing ontologies related to the system's semantics. We present the local ontology and its matching with existing ontologies, Preservation Metadata Implementation Strategies (PREMIS) and Metadata Object Description Schema (MODS), and we discuss linked data triples prototyped from the legacy relational database using the local ontology. We show how semantic data management can deal with inconsistencies in system data, and we conclude that a specific change in the system developer's mindset is necessary for extracting and "codifying" the tacit knowledge needed to improve the data interpretation process.
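A hedged sketch of the kind of mapping the prototype performs, turning a legacy relational row into triples under a local ontology aligned with MODS; the table layout, namespace URIs, and property names are assumptions for illustration only:

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF

    # Hypothetical local ontology namespace; in the paper this is matched
    # against PREMIS and MODS entities.
    LOCAL = Namespace("http://example.org/dlib-ontology#")
    MODS = Namespace("http://www.loc.gov/mods/rdf/v1#")

    # A row as it might come out of the legacy relational database.
    row = {"id": 42, "title": "Annual report 1998", "ingest_note": "scanned from microfilm"}

    g = Graph()
    item = URIRef(f"http://example.org/items/{row['id']}")
    g.add((item, RDF.type, LOCAL.DigitizedItem))
    g.add((item, MODS.title, Literal(row["title"])))
    # The "tacit" part: operational context that lives only in the system,
    # made explicit as a triple instead of staying in staff memory.
    g.add((item, LOCAL.provenanceNote, Literal(row["ingest_note"])))

    print(g.serialize(format="turtle"))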
