223 research outputs found

    THE VALUE OF KNOWLEDGE THROUGH H-BIM MODELS: HISTORIC DOCUMENTATION WITH A SEMANTIC APPROACH

    Abstract. Building Information Modeling (BIM) in the architectural heritage field is constantly proving to be fertile ground for experimenting with innovative systems for the enhancement and management of cultural heritage. Regarding the management of the entire process, the building field is gaining efficiency from the construction phase through to the management phase; conversely, the approach to historical buildings opens up interesting and heterogeneous scenarios, according to different levels of complexity. The presented work is the result of a collaboration between the Politecnico di Torino and the Escuela Técnica Superior de Arquitectura of Granada: the main aim was to create a historic building information model (H-BIM) of the building that today hosts the Faculty of Architecture (ETSAG), taking into account its historical past from the sixteenth century up to the present day, as the result of many modifications, extensions and changes of use over time. In this light, the BIM methodology can be considered a bridge between archive documentation and the digital model, proving to be an effective, semantically oriented data repository constituted not only by geometry but also by alpha-numerical attributes, whose effectiveness improves when it is directly linked to an object-oriented formal language.
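The semantically oriented repository the abstract describes (geometry plus alpha-numerical attributes plus documented history) can be sketched minimally. All names here (`HBimElement`, the archive references, the sample wall) are illustrative assumptions, not taken from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class HistoricalPhase:
    """One construction or modification phase of a building element."""
    period: str           # e.g. "16th century"
    intervention: str     # e.g. "original construction", "extension"
    source_document: str  # archive reference backing the claim

@dataclass
class HBimElement:
    """A semantically enriched model element: a geometry reference plus
    alpha-numerical attributes and a documented phase history."""
    element_id: str
    geometry_ref: str                           # pointer to the 3D geometry
    attributes: dict = field(default_factory=dict)
    phases: list = field(default_factory=list)

    def add_phase(self, phase: HistoricalPhase) -> None:
        self.phases.append(phase)

wall = HBimElement("W-001", "mesh/wall_001.obj",
                   attributes={"material": "masonry", "thickness_m": 0.6})
wall.add_phase(HistoricalPhase("16th century", "original construction", "ARCH-1562-04"))
wall.add_phase(HistoricalPhase("20th century", "conversion to faculty use", "ETSAG-1974-11"))
print(len(wall.phases))  # 2
```

The point of the sketch is that each archive document becomes a queryable attribute of the model element rather than a detached file.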

    Visualization for epidemiological modelling: challenges, solutions, reflections and recommendations.

    From Europe PMC via Jisc Publications Router. History: epub 2022-08-15, ppub 2022-10-01. Publication status: Published. Funder: UK Research and Innovation; Grant(s): ST/V006126/1, EP/V054236/1, EP/V033670/1. We report on an ongoing collaboration between epidemiological modellers and visualization researchers by documenting and reflecting upon knowledge constructs — a series of ideas, approaches and methods taken from existing visualization research and practice — deployed and developed to support modelling of the COVID-19 pandemic. Structured independent commentary on these efforts is synthesized through iterative reflection to develop: evidence of the effectiveness and value of visualization in this context; open problems upon which the research communities may focus; guidance for future activity of this type and recommendations to safeguard the achievements and promote, advance, secure and prepare for future collaborations of this kind. In describing and comparing a series of related projects that were undertaken in unprecedented conditions, our hope is that this unique report, and its rich interactive supplementary materials, will guide the scientific community in embracing visualization in its observation, analysis and modelling of data as well as in disseminating findings. Equally we hope to encourage the visualization community to engage with impactful science in addressing its emerging data challenges. If we are successful, this showcase of activity may stimulate mutually beneficial engagement between communities with complementary expertise to address problems of significance in epidemiology and beyond. See https://ramp-vis.github.io/RAMPVIS-PhilTransA-Supplement/. This article is part of the theme issue 'Technical challenges of modelling real-life epidemics and examples of overcoming these'.

    A survey of visualisation for live cell imaging

    Live cell imaging is an important biomedical research paradigm for studying dynamic cellular behaviour. Although phenotypic data derived from images are difficult to explore and analyse, some researchers have successfully addressed this with visualisation. Nonetheless, visualisation methods for live cell imaging data have been reported in an ad hoc and fragmented fashion. This leads to a knowledge gap where it is difficult for biologists and visualisation developers to evaluate the advantages and disadvantages of different visualisation methods, and for visualisation researchers to gain an overview of existing work to identify research priorities. To address this gap, we survey existing visualisation methods for live cell imaging from a visualisation research perspective for the first time. Based on recent visualisation theory, we perform a structured qualitative analysis of visualisation methods that includes characterising the domain and data, abstracting tasks, and describing visual encoding and interaction design. Based on our survey, we identify and discuss research gaps that future work should address: the broad analytical context of live cell imaging; the importance of behavioural comparisons; links with dynamic data visualisation; the consequences of different data modalities; shortcomings in interactive support; and, in addition to analysis, the value of the presentation of phenotypic data and insights to other stakeholders.

    Provenance-aware knowledge representation: A survey of data models and contextualized knowledge graphs

    Expressing machine-interpretable statements in the form of subject-predicate-object triples is a well-established practice for capturing semantics of structured data. However, the standard used for representing these triples, RDF, inherently lacks the mechanism to attach provenance data, which would be crucial to make automatically generated and/or processed data authoritative. This paper is a critical review of data models, annotation frameworks, knowledge organization systems, serialization syntaxes, and algebras that enable provenance-aware RDF statements. The various approaches are assessed in terms of standard compliance, formal semantics, tuple type, vocabulary term usage, blank nodes, provenance granularity, and scalability. This can be used to advance existing solutions and help implementers to select the most suitable approach (or a combination of approaches) for their applications. Moreover, the analysis of the mechanisms and their limitations highlighted in this paper can serve as the basis for novel approaches in RDF-powered applications with increasing provenance needs.
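One of the mechanisms such surveys cover — extending a triple to a quad whose fourth element names a graph that carries the provenance metadata — can be sketched without any RDF library. The identifiers (`ex:Alice`, `ex:graph1`, the sample source) are invented for illustration; the `prov:` keys follow the W3C PROV-O vocabulary:

```python
# A triple captures a statement about the world.
triple = ("ex:Alice", "ex:worksFor", "ex:AcmeCorp")

# A quad adds a fourth, named-graph term; provenance is then attached
# to the graph name rather than to the statement itself.
quad = triple + ("ex:graph1",)

provenance = {
    "ex:graph1": {
        "prov:wasDerivedFrom": "ex:hrDatabase",
        "prov:generatedAtTime": "2023-05-01T10:00:00Z",
    }
}

def provenance_of(quad, prov_index):
    """Look up the provenance record for the graph a quad belongs to."""
    return prov_index.get(quad[3], {})

print(provenance_of(quad, provenance)["prov:wasDerivedFrom"])  # ex:hrDatabase
```

The trade-off the survey assesses is visible even here: the graph name groups statements at some chosen granularity, so per-triple provenance requires one graph per triple.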

    A Formal Framework for Linguistic Annotation

    `Linguistic annotation' covers any descriptive or analytic notations applied to raw language data. The basic data may be in the form of time functions -- audio, video and/or physiological recordings -- or it may be textual. The added notations may include transcriptions of all sorts (from phonetic features to discourse structures), part-of-speech and sense tagging, syntactic analysis, `named entity' identification, co-reference annotation, and so on. While there are several ongoing efforts to provide formats and tools for such annotations and to publish annotated linguistic databases, the lack of widely accepted standards is becoming a critical problem. Proposed standards, to the extent they exist, have focussed on file formats. This paper focuses instead on the logical structure of linguistic annotations. We survey a wide variety of existing annotation formats and demonstrate a common conceptual core, the annotation graph. This provides a formal framework for constructing, maintaining and searching linguistic annotations, while remaining consistent with many alternative data structures and file formats.
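The annotation graph the abstract names is a labelled directed graph whose nodes may carry time offsets and whose arcs carry typed labels, so multiple annotation layers share one timeline. A minimal sketch, with invented class and sample data (the utterance, offsets, and layer names are not from the paper):

```python
class AnnotationGraph:
    """Nodes are (optionally time-stamped) anchors; arcs carry a
    (type, label) pair, so word, phrase, etc. layers coexist."""

    def __init__(self):
        self.nodes = {}   # node id -> time offset in seconds, or None
        self.arcs = []    # (from_node, to_node, annotation_type, label)

    def add_node(self, node_id, time=None):
        self.nodes[node_id] = time

    def add_arc(self, src, dst, ann_type, label):
        self.arcs.append((src, dst, ann_type, label))

    def annotations(self, ann_type):
        """All arcs of one annotation layer, as (src, dst, label)."""
        return [(s, d, lbl) for s, d, t, lbl in self.arcs if t == ann_type]

g = AnnotationGraph()
for node_id, t in [(0, 0.0), (1, 0.32), (2, 0.61)]:
    g.add_node(node_id, t)
g.add_arc(0, 1, "word", "hello")
g.add_arc(1, 2, "word", "world")
g.add_arc(0, 2, "phrase", "greeting")  # a phrase spanning both words
print(g.annotations("word"))  # [(0, 1, 'hello'), (1, 2, 'world')]
```

Because layers only share node identities, a file format dispute over how to serialize the arcs leaves this logical structure untouched — which is the paper's argument for focusing on it.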

    Music Encoding Conference Proceedings 2021, 19–22 July, 2021 University of Alicante (Spain): Onsite & Online

    This volume includes the papers and posters presented at the Music Encoding Conference 2021, held in Alicante from 19 to 22 July 2021. Funded by project Multiscore, MCIN/AEI/10.13039/50110001103

    Trust and context in cyberspace

    Every day we place trust or reliance on other people and on inanimate objects, but trust may be diminished in the world of information resources and technology. We are often told that information needs higher standards of verification in digital realms than in the paper world. Similarly, when we encounter digital records and archives we may be uncertain how far we can trust them. In the past, trust in records was said to be reinforced by trust in archivists and archival institutions. However, trust in professional experts and institutions is waning; notions of expert objectivity are increasingly challenged. This paper explores an idea proposed by David Weinberger, that ‘transparency is the new objectivity’. Where records are concerned, documentation of provenance and context forms a basis for enhancing their transparency and thus for evaluating their trustworthiness. Many commentators have expressed anxiety that, in digital environments where resources are reused and remixed at will, records may become decontextualized. But in computer science questions are now being asked about how data can be trusted and verified, and knowledge of their provenance is increasingly seen as a foundation for enabling trust. Many computer scientists argue that, while data should be reusable, each piece of data should carry evidence of its history and original contexts to help those who encounter it to judge its trustworthiness. Some researchers have set out to develop systems to capture and preserve information about data provenance. In the longer term, this research may help archivists meet the challenges of gathering and maintaining contextual information in the world of digital record-keeping. Methods of automatically harvesting certain kinds of contextual information are under investigation; automated solutions are likely to expedite what are currently time-consuming manual processes. However, merely being presented with information about provenance is not enough. 
Insofar as individuals or institutions supply us with that information, we have to decide how far we trust what those people or institutions tell us. There is still a place for expert voices, but experts cannot be seen as infallible providers of objective information.
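The idea that "each piece of data should carry evidence of its history" can be sketched as a hash-linked provenance trail, where each step commits to the one before it so later tampering with the history is detectable. The actors, actions, and record names are invented for illustration; the papers the abstract alludes to describe richer systems:

```python
import hashlib
import json

def record_step(trail, actor, action, payload):
    """Append a provenance step linked to the previous one by hash."""
    prev_hash = trail[-1]["hash"] if trail else None
    step = {"actor": actor, "action": action,
            "payload": payload, "prev": prev_hash}
    step["hash"] = hashlib.sha256(
        json.dumps(step, sort_keys=True).encode()).hexdigest()
    trail.append(step)
    return trail

def verify(trail):
    """Recompute every link; any edit to an earlier step breaks the chain."""
    for i, step in enumerate(trail):
        expected_prev = trail[i - 1]["hash"] if i else None
        if step["prev"] != expected_prev:
            return False
        body = {k: v for k, v in step.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if recomputed != step["hash"]:
            return False
    return True

trail = []
record_step(trail, "registry-office", "created", "birth-record-1897")
record_step(trail, "archive", "digitised", "scan-2015-tiff")
print(verify(trail))  # True
```

This is only the mechanical half of the abstract's point: the chain proves the record of custody is intact, while trusting the actors who wrote each step remains the human judgment the paper discusses.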


    Which service interfaces fit the model web?

    Paper presented at The Fourth International Conference on Advanced Geographic Information Systems, Applications, and Services, GEOProcessing 2012, held in València from 30 January to 4 February 2012. The Model Web has been proposed as a concept for integrating scientific models in an interoperable and collaborative manner. However, four years after the initial idea was formulated, there is still no stable long-term solution. Multiple authors propose Web Service based approaches to model publication and chaining, but current implementations are highly case-specific and lack flexibility. This paper discusses the Web Service interfaces which are required for supporting integrated environmental modeling in a sustainable manner. We explore ways to expose environmental models and their components using Web Service interfaces. Our discussions present work in progress towards establishing the Web Service technological grounds for simplifying information publication and exchange within the Model Web. As a main outcome, this contribution identifies challenges with respect to the required geo-processing and relates them to currently available Web Service standards.
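The interoperability requirement the abstract discusses — models exposing a common interface so they can be discovered and chained — can be sketched as a minimal service contract. Everything here (`ModelService`, `RunoffModel`, the rational-method-style formula) is an invented illustration, not an interface proposed by the paper:

```python
from abc import ABC, abstractmethod

class ModelService(ABC):
    """Minimal contract a model could expose as a web service:
    self-description plus execution, so generic clients can wire
    heterogeneous models together."""

    @abstractmethod
    def describe(self) -> dict:
        """Advertise input and output names for discovery and chaining."""

    @abstractmethod
    def execute(self, inputs: dict) -> dict:
        """Run the model on the given inputs."""

class RunoffModel(ModelService):
    def describe(self):
        return {"inputs": ["rainfall_mm", "area_km2", "coefficient"],
                "outputs": ["runoff_m3"]}

    def execute(self, inputs):
        # Rational-method-style estimate, illustrative only:
        # volume = coefficient * rainfall depth * area
        runoff = (inputs["coefficient"] * inputs["rainfall_mm"] / 1000.0
                  * inputs["area_km2"] * 1e6)
        return {"runoff_m3": runoff}

model = RunoffModel()
out = model.execute({"rainfall_mm": 10, "area_km2": 2, "coefficient": 0.5})
print(out["runoff_m3"])  # 10000.0
```

A chaining client only ever touches `describe` and `execute`, which is the flexibility the paper finds missing from highly case-specific implementations.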