The hunt for submarines in classical art: mappings between scientific invention and artistic interpretation
This is a report to the AHRC's ICT in Arts and Humanities Research Programme.
This report stems from a project which aimed to produce a series of mappings between advanced imaging information and communications technologies (ICT) and needs within visual arts research. A secondary aim was to demonstrate the feasibility of a structured approach to establishing such mappings.
The project was carried out over 2006, from January to December, by the visual arts centre of the Arts and Humanities Data Service (AHDS Visual Arts).1 It was funded by the Arts and Humanities Research Council (AHRC) as one of the Strategy Projects run under the aegis of its ICT in Arts and Humanities Research programme. The programme, which runs from October 2003 until September 2008, aims "to develop, promote and monitor the AHRC's ICT strategy, and to build capacity nation-wide in the use of ICT for arts and humanities research".2 As part of this, the Strategy Projects were intended to contribute to the programme in two ways: knowledge-gathering projects would inform the programme's Fundamental Strategic Review of ICT, conducted for the AHRC in the second half of 2006, focusing "on critical strategic issues such as e-science and peer-review of digital resources". Resource-development projects would "build tools and resources of broad relevance across the range of the AHRC's academic subject disciplines".3 This project fell into the knowledge-gathering strand.
The project ran under the leadership of Dr Mike Pringle, Director, AHDS Visual Arts, and the day-to-day management of Polly Christie, Projects Manager, AHDS Visual Arts. The research was carried out by Dr Rupert Shepherd.
UK utility data integration: overcoming schematic heterogeneity
In this paper we discuss syntactic, semantic and schematic issues which inhibit the integration of utility data in the UK. We then focus on the techniques employed within the VISTA project to overcome schematic heterogeneity. A Global Schema based architecture is employed. Although automated approaches to Global Schema definition were attempted, the heterogeneities of the sector were too great, so a manual approach to Global Schema definition was employed. The techniques used to define and subsequently map source utility data models to this schema are discussed in detail. In order to ensure a coherent integrated model, sub- and cross-domain validation issues are then highlighted. Finally, the proposed framework and data flow for schematic integration is introduced.
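The manual mapping approach described in this abstract can be pictured as a per-source lookup from each utility's field names into the hand-defined global schema, with a validation pass to catch unmapped fields. The sketch below is purely illustrative: the schema fields, source names, and mappings are invented assumptions, not the VISTA project's actual data models.

```python
# Hypothetical sketch of mapping heterogeneous source records into a manually
# defined global schema. All field names and mappings are illustrative
# assumptions, not the project's real utility data models.

GLOBAL_SCHEMA = ["asset_id", "asset_type", "owner", "location"]

# One hand-written mapping per source utility data model.
SOURCE_MAPPINGS = {
    "water_co": {"pipe_ref": "asset_id", "kind": "asset_type",
                 "operator": "owner", "grid_ref": "location"},
    "gas_co": {"id": "asset_id", "category": "asset_type",
               "company": "owner", "easting_northing": "location"},
}

def to_global(source: str, record: dict) -> dict:
    """Translate one source record into the global schema, flagging gaps."""
    mapping = SOURCE_MAPPINGS[source]
    out = {mapping[k]: v for k, v in record.items() if k in mapping}
    # Sub-domain validation: every global field must be populated.
    missing = [f for f in GLOBAL_SCHEMA if f not in out]
    if missing:
        raise ValueError(f"{source}: unmapped global fields {missing}")
    return out

print(to_global("water_co", {"pipe_ref": "W-17", "kind": "main",
                             "operator": "Anglia Water", "grid_ref": "TL4458"}))
```

Automating the construction of `SOURCE_MAPPINGS` is exactly the step the abstract reports was defeated by the sector's heterogeneity, which is why each entry is written by hand here.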
Enabling quantitative data analysis through e-infrastructures
This paper discusses how quantitative data analysis in the social sciences can engage with and exploit an e-Infrastructure. We highlight how a number of activities which are central to quantitative data analysis, referred to as "data management", can benefit from e-Infrastructure support. We conclude by discussing how these issues are relevant to the DAMES (Data Management through e-Social Science) research Node, an ongoing project that aims to develop e-Infrastructural resources for quantitative data analysis in the social sciences.
A look at cloud architecture interoperability through standards
Enabling cloud infrastructures to evolve into a transparent platform while preserving integrity raises interoperability issues: how components are connected needs to be addressed. Interoperability requires standard data models and communication encoding technologies compatible with the existing Internet infrastructure. To reduce vendor lock-in, cloud computing must implement universal strategies regarding standards, interoperability and portability. Open standards are of critical importance and need to be embedded into interoperability solutions. Interoperability is determined at the data level as well as the service level; the corresponding modelling standards and integration solutions are analysed.
Mapping heterogeneous research infrastructure metadata into a unified catalogue for use in a generic virtual research environment
Virtual Research Environments (VREs), also known as science gateways or virtual laboratories, assist researchers in data science by integrating tools for data discovery, data retrieval, workflow management and researcher collaboration, often coupled with a specific computing infrastructure. Recently, the push for better open data science has led to the creation of a variety of dedicated research infrastructures (RIs) that gather data and provide services to different research communities, all of which can be used independently of any specific VRE. There is therefore a need for generic VREs that can be coupled with the resources of many different RIs simultaneously and easily customised to the needs of specific communities. However, the resource metadata produced by these RIs rarely all adhere to any one standard or vocabulary, making it difficult to search and discover resources independently of their providers without some translation into a common framework. Cross-RI search can be expedited by using mapping services that harvest RI-published metadata to build unified resource catalogues, but the development and operation of such services pose a number of challenges.
In this paper, we discuss some of these challenges and look specifically at the VRE4EIC Metadata Portal, which uses X3ML mappings to build a single catalogue describing data products and other resources provided by multiple RIs. The Metadata Portal was built in accordance with the e-VRE Reference Architecture, a microservice-based architecture for generic modular VREs, and uses the CERIF standard to structure its catalogued metadata. We consider the extent to which it addresses the challenges of cross-RI search, particularly in the environmental and earth science domain, and how it can be further augmented, for example by taking advantage of linked vocabularies to provide more intelligent semantic search across multiple domains of discourse.
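The harvest-and-map pattern this abstract describes (per-format mappings feeding one unified catalogue) can be sketched in a few lines. This is not VRE4EIC code and does not use X3ML or CERIF; the record formats, field names, and unified schema below are invented assumptions chosen only to show the shape of the technique.

```python
# Illustrative sketch: normalise metadata harvested from multiple research
# infrastructures (RIs) into a single catalogue with one record shape.
# Formats, field names, and the unified schema are assumptions.

def map_datacite(rec: dict) -> dict:
    """Map a (simplified, hypothetical) DataCite-like record."""
    return {"title": rec["titles"][0], "provider": rec["publisher"],
            "identifier": rec["doi"]}

def map_dublin_core(rec: dict) -> dict:
    """Map a (simplified, hypothetical) Dublin Core-like record."""
    return {"title": rec["dc:title"], "provider": rec["dc:publisher"],
            "identifier": rec["dc:identifier"]}

MAPPERS = {"datacite": map_datacite, "dublin_core": map_dublin_core}

def build_catalogue(harvested: list) -> list:
    """Apply the per-format mapping to each harvested (format, record) pair."""
    return [MAPPERS[fmt](rec) for fmt, rec in harvested]

catalogue = build_catalogue([
    ("datacite", {"titles": ["Sea temperature series"], "publisher": "RI-A",
                  "doi": "10.1234/abc"}),
    ("dublin_core", {"dc:title": "Air quality grid", "dc:publisher": "RI-B",
                     "dc:identifier": "riB:42"}),
])
# Cross-RI search now only needs to query the unified records.
print([c["title"] for c in catalogue])
```

In the real portal, the per-format functions would be replaced by declarative X3ML mappings and the unified schema by CERIF, but the operational challenge is the same: every new RI format requires a new, maintained mapping.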
An Innovative Workspace for The Cherenkov Telescope Array
The Cherenkov Telescope Array (CTA) is an initiative to build the next-generation ground-based gamma-ray observatory. We present a prototype workspace developed at INAF that aims to provide innovative solutions for the CTA community. The workspace leverages open-source technologies to provide web access to a set of tools widely used by the CTA community. Two different user interaction models, connected to an authentication and authorization infrastructure, have been implemented in this workspace. The first is a workflow management system accessed via a science gateway (based on the Liferay platform); the second is an interactive virtual desktop environment. The integrated workflow system makes it possible to run applications used in astronomy and physics research on distributed computing infrastructures (ranging from clusters to grids and clouds). The interactive desktop environment allows many software packages to be used through their native graphical user interfaces without any installation on local desktops. The science gateway and the interactive desktop environment are connected to an authentication and authorization infrastructure composed of a Shibboleth identity provider and a Grouper authorization solution. The attributes released by Grouper are consumed by the science gateway to authorize access to specific web resources, and the role management mechanism in Liferay provides the attribute-role mapping.
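The attribute-role mapping described at the end of this abstract amounts to translating group-membership attributes released by the identity layer into gateway roles that gate web resources. The sketch below illustrates that flow only; the group names, role names, and functions are hypothetical, not the actual Grouper or Liferay configuration.

```python
# Hypothetical sketch of attribute-based role mapping in the spirit of the
# Grouper -> Liferay flow above: group attributes released after
# authentication are mapped to science-gateway roles.
# All group and role names are invented for illustration.

ATTRIBUTE_ROLE_MAP = {
    "cta:simulations": "workflow-user",
    "cta:analysis": "desktop-user",
    "cta:admins": "gateway-admin",
}

def roles_for(released_attributes: list) -> set:
    """Map released group attributes to gateway roles, ignoring unknowns."""
    return {ATTRIBUTE_ROLE_MAP[a] for a in released_attributes
            if a in ATTRIBUTE_ROLE_MAP}

def authorize(released_attributes: list, required_role: str) -> bool:
    """Grant access to a web resource only if a mapped role matches."""
    return required_role in roles_for(released_attributes)

print(authorize(["cta:simulations"], "workflow-user"))  # → True
print(authorize([], "workflow-user"))                   # → False
```

Keeping the mapping in one table, as here, mirrors the benefit of centralising authorization in Grouper: resource owners change group memberships without touching the gateway code.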
Towards Interoperable Research Infrastructures for Environmental and Earth Sciences
This open access book summarises the latest developments on data management in the EU H2020 ENVRIplus project, which brought together more than 20 environmental and Earth science research infrastructures into a single community. It provides readers with a systematic overview of the common challenges faced by research infrastructures and how a "reference model guided" engineering approach can be used to achieve greater interoperability among such infrastructures in the environmental and earth sciences. The 20 contributions in this book are structured in 5 parts on the design, development, deployment, operation and use of research infrastructures. Part one provides an overview of the state of the art of research infrastructure and relevant e-Infrastructure technologies, part two discusses the reference model guided engineering approach, the third part presents the software and tools developed for common data management challenges, the fourth part demonstrates the software via several use cases, and the last part discusses the sustainability and future directions.