A semantic web approach for built heritage representation
In a built heritage process, understood as a structured system of activities
aimed at the investigation, preservation, and management of architectural
heritage, every task accomplished by the several actors involved is deeply
influenced by the way knowledge is represented and shared. In current
heritage practice, knowledge representation and management have shown several
limitations due to the difficulty of dealing with large amounts of extremely
heterogeneous data. On this basis, this research aims at extending semantic
web approaches and technologies to architectural heritage knowledge
management, in order to provide an integrated and multidisciplinary
representation of the artifact and of the knowledge necessary to support any
decision, intervention, or management activity. To this purpose, an
ontology-based system representing the knowledge related to the artifact and
its contexts has been developed through the formalization of domain-specific
entities and the relationships between them.
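The kind of domain formalization the abstract describes can be sketched as a small RDF-style triple store. This is a minimal illustration only; the entity and relation names below are hypothetical and are not taken from the paper's actual ontology:

```python
# Minimal sketch of an ontology-like triple store for built heritage.
# Entity and relation names are illustrative, not the paper's schema.

triples = set()

def add(subject, predicate, obj):
    triples.add((subject, predicate, obj))

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# Domain-specific entities and the relationships between them.
add("PalazzoX", "isA", "ArchitecturalArtifact")
add("PalazzoX", "hasComponent", "Facade")
add("Facade", "documentedBy", "PhotogrammetricSurvey2019")
add("PalazzoX", "subjectOf", "RestorationIntervention")

# A decision-support query: which documentation covers the artifact's components?
components = [o for _, _, o in query("PalazzoX", "hasComponent")]
docs = [o for c in components for _, _, o in query(c, "documentedBy")]
print(docs)  # ['PhotogrammetricSurvey2019']
```

In a real system these triples would live in an RDF store and be queried with SPARQL; the point here is only that once relationships are formalized, cross-disciplinary questions become mechanical graph lookups.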
Server-side workflow execution using data grid technology for reproducible analyses of data-intensive hydrologic systems
Many geoscience disciplines utilize complex computational models for advancing understanding and sustainable management of Earth systems. Executing such models and their associated data preprocessing and postprocessing routines can be challenging for a number of reasons, including (1) accessing and preprocessing the large volume and variety of data required by the model, (2) postprocessing large data collections generated by the model, and (3) orchestrating data processing tools, each with unique software dependencies, into workflows that can be easily reproduced and reused. To address these challenges, the work reported in this paper leverages the Workflow Structured Object functionality of the Integrated Rule-Oriented Data System and demonstrates how it can be used to access distributed data, encapsulate hydrologic data processing as workflows, and federate with other community-driven cyberinfrastructure systems. The approach is demonstrated for a study investigating the impact of drought on populations in the Carolinas region of the United States. The analysis leverages computational modeling along with data from the Terra Populus project and data management and publication services provided by the Sustainable Environment-Actionable Data project. The work is part of a larger effort under the DataNet Federation Consortium project that aims to demonstrate data and computational interoperability across cyberinfrastructure developed independently by scientific communities.
Plain Language Summary: Executing computational workflows in the geosciences can be challenging, especially when dealing with large, distributed, and heterogeneous data sets and computational tools. We present a methodology for addressing this challenge using the Integrated Rule-Oriented Data System (iRODS) Workflow Structured Object (WSO). We demonstrate the approach through an end-to-end application of data access, processing, and publication of digital assets for a scientific study analyzing drought in the Carolinas region of the United States.
Key Points:
- Reproducibility of data-intensive analyses remains a significant challenge
- Data grids are useful for reproducibility of workflows requiring large, distributed data sets
- Data and computations should be co-located on servers to create executable Web resources
Peer Reviewed
https://deepblue.lib.umich.edu/bitstream/2027.42/137520/1/ess271_am.pdf
https://deepblue.lib.umich.edu/bitstream/2027.42/137520/2/ess271.pd
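The workflow-encapsulation idea described above (preprocess, run the model, postprocess, and keep a reproducible record of what ran) can be sketched generically. This is an illustration of the pattern only; it does not use the iRODS/WSO API, and the step functions are invented stand-ins:

```python
# Generic sketch of encapsulating preprocess -> model -> postprocess
# steps as a reproducible workflow with a recorded provenance log.
# Illustrative only; not the iRODS Workflow Structured Object API.

def preprocess(raw):
    # Stand-in for subsetting/cleaning the input data.
    return [x for x in raw if x is not None]

def run_model(data):
    # Stand-in for the computational model.
    return sum(data) / len(data)

def postprocess(result):
    return round(result, 2)

def run_workflow(raw, steps):
    """Execute steps in order, logging each one for reproducibility."""
    provenance, value = [], raw
    for step in steps:
        value = step(value)
        provenance.append(step.__name__)
    return value, provenance

result, log = run_workflow([3.0, None, 4.5],
                           [preprocess, run_model, postprocess])
print(result, log)  # 3.75 ['preprocess', 'run_model', 'postprocess']
```

Co-locating this kind of executable recipe with the data on the server, rather than on a researcher's laptop, is what makes the analysis a reusable Web resource.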
Pathways: Augmenting interoperability across scholarly repositories
In the emerging eScience environment, repositories of papers, datasets,
software, etc., should be the foundation of a global and natively-digital
scholarly communications system. The current infrastructure falls far short of
this goal. Cross-repository interoperability must be augmented to support the
many workflows and value-chains involved in scholarly communication. This will
not be achieved through the promotion of a single repository architecture or
content representation, but instead requires an interoperability framework to
connect the many heterogeneous systems that will exist.
We present a simple data model and service architecture that augments
repository interoperability to enable scholarly value-chains to be implemented.
We describe an experiment that demonstrates how the proposed infrastructure can
be deployed to implement the workflow involved in the creation of an overlay
journal over several different repository systems (Fedora, aDORe, DSpace and
arXiv).
Comment: 18 pages. Accepted for the International Journal on Digital Libraries
special issue on Digital Libraries and eScience.
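The "simple data model" the abstract mentions can be imagined as digital objects that record their home repository plus typed datastreams, so that an overlay service can aggregate content across heterogeneous systems. The field names below are hypothetical, chosen for illustration rather than taken from the paper:

```python
# Hedged sketch of a minimal cross-repository data model: each digital
# object records its home repository and typed datastreams, so an
# overlay journal can be implemented as a selection layered over
# existing repositories. Field names are illustrative, not the paper's.
from dataclasses import dataclass, field

@dataclass
class DigitalObject:
    identifier: str
    repository: str                                  # e.g. "arXiv", "DSpace"
    datastreams: dict = field(default_factory=dict)  # format -> locator

def build_overlay(objects, accepted_ids):
    """An overlay journal issue: accepted objects, wherever they live."""
    return [o for o in objects if o.identifier in accepted_ids]

corpus = [
    DigitalObject("arxiv:0705.1234", "arXiv",
                  {"pdf": "https://example.org/a.pdf"}),
    DigitalObject("hdl:1802/42", "DSpace",
                  {"pdf": "https://example.org/b.pdf"}),
]
issue = build_overlay(corpus, {"arxiv:0705.1234"})
print([o.repository for o in issue])  # ['arXiv']
```

The essential interoperability move is that the overlay never copies content; it only references identifiers and datastreams exposed by the underlying repositories.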
What is an Analogue for the Semantic Web and Why is Having One Important?
This paper postulates that for the Semantic Web to grow and gain input from fields that will surely benefit it, it needs an analogue that helps people understand not only what it is, but also what opportunities these new protocols enable. The model proposed in the paper takes the way Web interaction has been framed as a baseline to inform a similar analogue for the Semantic Web. While the Web has been represented as a Page + Links, the paper argues that the Semantic Web can be conceptualized as a Notebook + Memex. The argument considers how this model also presents new challenges for fundamental human interaction with computing, and how hypertext models have much to contribute to this new understanding of distributed information systems.
InfoHarness: Managing Distributed, Heterogeneous Information
Today, important information is scattered across so many places, formats, and media that getting the right information at the right time and place is an extremely difficult task. Developing a single software product, for example, involves the creation of documents ranging from the requirements specification and project schedules to marketing presentations, multimedia tutorials, and more. Each document may be created by a different person using a different tool, and each may be stored in a different place. InfoHarness is an information integration system, platform, and tool set that addresses these problems, managing huge amounts of heterogeneous information in a distributed environment. Through a powerful, consistent user interface, InfoHarness provides rapid search of and access to information assets including documents and parts of documents, mail messages, images, code files, video clips, Web pages with URLs, InfoHarness queries, and views of relational tables. The system makes all these artifacts available without relocating, restructuring, or reformatting the data.
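The central idea, indexing heterogeneous assets through lightweight metadata that points at the data in place rather than moving or converting it, can be sketched as follows. The attribute names and sample assets are invented for illustration:

```python
# Sketch of the InfoHarness idea: a catalog of lightweight metadata
# records that reference heterogeneous assets where they already live,
# without relocating, restructuring, or reformatting them.
# Attribute names and sample entries are illustrative.

catalog = []

def register(asset_type, location, keywords):
    """Add a metadata record pointing at an asset in place."""
    catalog.append({"type": asset_type, "location": location,
                    "keywords": set(keywords)})

def search(term):
    """Return locations of all assets whose keywords mention the term."""
    return [a["location"] for a in catalog if term in a["keywords"]]

register("document", "file:///specs/requirements.doc",
         ["requirements", "spec"])
register("mail", "imap://mail/inbox/1042",
         ["schedule", "requirements"])
register("video", "file:///media/tutorial.mpg", ["tutorial"])

print(search("requirements"))
# ['file:///specs/requirements.doc', 'imap://mail/inbox/1042']
```

Because the catalog stores only references, a mail message, a video clip, and a relational view can all be returned by the same query without any format conversion.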
Current trends on ICT technologies for enterprise information systems
The proposed paper discusses current trends in ICT technologies for Enterprise Information Systems. It starts by defining four big challenges for the next generation of information systems: (1) Data Value Chain Management; (2) Context Awareness; (3) Interaction and Visualization; and (4) Human Learning. The major contributions towards the next generation of information systems are elaborated based on the work and experience of the authors and their teams. These include: (1) ontology-based solutions for semantic interoperability; (2) context-aware infrastructures; (3) Product Avatar based interactions; and (4) human learning. Finally, the current state of research is discussed, highlighting the impact of these solutions on the economic and social landscape.
Embracing the future: embedding digital repositories in the University of London
Digital repositories can help Higher Education Institutions (HEIs) to develop coherent and coordinated approaches to capture, identify, store and retrieve intellectual assets such as datasets, course material and research papers. With the advances of technology, an increasing number of Higher Education Institutions are implementing digital repositories. The leadership of these institutions, however, has been concerned about the awareness of and commitment to repositories, and their sustainability in the future.
This study informs a consortium of thirteen London institutions with an assessment of the current awareness and attitudes of stakeholders regarding digital repositories in three case study institutions. The report identifies drivers for, and barriers to, the embedding of digital repositories in institutional strategy. The findings should therefore be of use to decision-makers involved in the development of digital repositories. Our approach was based entirely on consultations with particular stakeholder groups in the three institutions, conducted through interviews with individuals.
The research in this report was prepared for the SHERPA-LEAP Consortium and conducted by RAND Europe.
Generating collaborative systems for digital libraries: A model-driven approach
This is an open access article shared under a Creative Commons Attribution 3.0 Licence (http://creativecommons.org/licenses/by/3.0/). Copyright @ 2010 The Authors.
The design and development of a digital library involves different stakeholders, such as information architects, librarians, and domain experts, who need to agree on a common language to describe, discuss, and negotiate the services the library has to offer. To this end, high-level, language-neutral models have to be devised. Metamodeling techniques favor the definition of domain-specific visual languages through which stakeholders can share their views and directly manipulate representations of the domain entities. This paper describes CRADLE (Cooperative-Relational Approach to Digital Library Environments), a metamodel-based framework and visual language for the definition of notions and services related to the development of digital libraries. A collection of tools allows the automatic generation of several services, defined with the CRADLE visual language, and of the graphical user interfaces providing access to them for the final user. The effectiveness of the approach is illustrated by presenting digital libraries generated with CRADLE, while the CRADLE environment has been evaluated using the cognitive dimensions framework.
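The model-driven generation idea, a declarative description of a digital-library entity from which a working service is produced automatically, can be sketched in miniature. The spec format and entity names below are invented for illustration and bear no relation to CRADLE's actual visual language:

```python
# Sketch of model-driven service generation: from a declarative entity
# description, produce a minimal working (in-memory) CRUD service.
# The spec format and names are invented, not CRADLE's actual language.

def generate_service(entity_name, fields):
    """Return a minimal service generated from the entity description."""
    store = {}

    def create(identifier, **values):
        missing = [f for f in fields if f not in values]
        if missing:
            raise ValueError(f"{entity_name} missing fields: {missing}")
        store[identifier] = values

    def read(identifier):
        return store[identifier]

    return {"create": create, "read": read}

# A stakeholder-level model of a 'Document' entity...
document_service = generate_service("Document", ["title", "author"])
# ...yields a ready-to-use service enforcing the modelled constraints.
document_service["create"]("doc1", title="CRADLE", author="The Authors")
print(document_service["read"]("doc1")["title"])  # CRADLE
```

A real generator would emit persistent services and user interfaces rather than closures, but the payoff is the same: stakeholders edit the model, not the code.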
4D Reconstruction and Visualization of Cultural Heritage: Analysing our Legacy Through Time
Temporal analyses and multi-temporal 3D reconstruction are fundamental for the preservation and maintenance of all forms of Cultural Heritage (CH), and are the basis for decisions related to interventions and promotion. Introducing the fourth dimension of time into three-dimensional geometric modelling of real data allows the creation of a multi-temporal representation of a site. In this way, scholars from various disciplines (surveyors, geologists, archaeologists, architects, philologists, etc.) are provided with a new set of tools and working methods to support the study of the evolution of heritage sites, both to develop hypotheses about the past and to model likely future developments. The capacity to "see" the dynamic evolution of CH assets across different spatial scales (e.g. building, site, city or territory), compressed into a diachronic model, affords the possibility of better understanding the present status of CH in the light of its history. However, there are numerous challenges in carrying out 4D modelling and the requisite multi-source data integration. It is necessary to identify the specifications, needs and requirements of the CH community in order to understand the required levels of 4D model information. In this way, it is possible to determine the optimum materials and technologies to be utilised at different CH scales, as well as the data management and visualization requirements. This manuscript aims to provide a comprehensive approach for CH time-varying representations, analysis and visualization across different working scales and environments: rural landscape, urban landscape and architectural scales. Within this aim, the different available metric data sources are systemized and evaluated in terms of their suitability.
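A 4D (3D + time) representation can be sketched as a set of dated geometry snapshots, where the model state at any queried moment is the most recent survey on or before that date. The class design and the sample data below are invented for illustration, not taken from the manuscript:

```python
# Sketch of a 4D (3D + time) site model: dated geometry snapshots,
# queried for the state of the site at an arbitrary moment.
# Class design and sample data are illustrative only.
import bisect

class FourDModel:
    """Multi-temporal site model: one geometry snapshot per survey year."""

    def __init__(self):
        self._years = []       # survey years, kept sorted
        self._snapshots = {}   # year -> geometry description

    def add_epoch(self, year, geometry):
        bisect.insort(self._years, year)
        self._snapshots[year] = geometry

    def state_at(self, year):
        """Latest snapshot acquired on or before the queried year."""
        i = bisect.bisect_right(self._years, year)
        if i == 0:
            raise KeyError(f"no survey on or before {year}")
        return self._snapshots[self._years[i - 1]]

site = FourDModel()
site.add_epoch(1950, {"towers": 2})   # e.g. archival photogrammetry
site.add_epoch(2020, {"towers": 1})   # e.g. laser-scanning survey
print(site.state_at(1999))  # {'towers': 2}
```

Real snapshots would be meshes or point clouds from the heterogeneous metric sources the abstract lists; the time-indexing logic, however, is independent of the geometry format.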
- …