Web-based platform to collect, share and manage technical data of historical systemic architectures: the Telegraphic Towers along the Madrid-Valencia path
Considering the variety of architectural Cultural Heritage typologies, systemic architectures require specific attention in the recovery process. Their "extension" and "recurrence", at both geographic and technological levels, add complexity to the knowledge process; they require systematic ways of categorisation and comprehension to guarantee correct diagnosis and suitable rehabilitation. Recent applications of the Internet of Things (IoT) to built Cultural Heritage have demonstrated the potential of three-dimensional (3D) geographic information system (GIS) models and structured databases in supporting complex degrees of knowledge for technicians, as well as management for administrators. Building on such experiences, this work presents the setting up of a web-based platform to support the knowledge and management of systemic architectures, considering the geographical distribution of fabrics, natural and anthropic boundary conditions, and technical and administrative details. The platform takes advantage of digital models, machine- and deep-learning procedures and relational databases, in a GIS-based environment, for the recognition and categorisation of the prevalent physical and qualitative features of systemic architectures, the recognition and qualification of dominant and recurrent decays, and the semi-automatic management of recovery activities. Specifically, the main digital objects used for testing the applied techniques and setting up the platform are Red-Green-Blue (RGB) and mapped point clouds of the historical Telegraphic Towers located along the Madrid-Valencia path, resulting from on-site investigations. These cases were chosen because of the high level of knowledge the authors have reached about them in recent years, which allows rules within the decision-support systems, and innovative techniques for decay mapping, to be tested.
As the experience has demonstrated, the systematisation of technical details and of the operative pipeline of methods and tools allows the normalisation and standardisation of the intervention-selection process; this offers policymakers an innovative tool, based on traditional procedures for conservation plans, that is coherent with a priority-based practice.
A dynamic knowledge graph approach to distributed self-driving laboratories
Acknowledgements: This research was supported by the National Research Foundation, Prime Minister's Office, Singapore, under its Campus for Research Excellence and Technological Enterprise (CREATE) programme, and by the Pharma Innovation Platform Singapore (PIPS) via a grant to CARES Ltd, "Data2Knowledge, C12". This project was co-funded by the European Regional Development Fund via the project "Innovation Centre in Digital Molecular Technologies" and by UKRI via project EP/S024220/1, "EPSRC Centre for Doctoral Training in Automated Chemical Synthesis Enabled by Digital Molecular Technologies". Part of this work was also supported by Towards Turing 2.0 under EPSRC Grant EP/W037211/1. The authors thank Dr. Andrew C. Breeson for his helpful suggestions on graphical design. J.B. acknowledges financial support provided by a CSC Cambridge International Scholarship from the Cambridge Trust and the China Scholarship Council. C.J.T. is a Sustaining Innovation Postdoctoral Research Associate at Astex Pharmaceuticals and thanks Astex Pharmaceuticals for funding, as well as his Astex colleagues Chris Johnson, Rachel Grainger, Mark Wade, Gianni Chessari, and David Rees for their support. S.D.R. acknowledges financial support from Fitzwilliam College, Cambridge, and the Cambridge Trust. M.K. gratefully acknowledges the support of the Alexander von Humboldt Foundation. For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising.
Abstract
The ability to integrate resources and share knowledge across organisations empowers scientists to expedite the scientific discovery process. This is especially crucial in addressing emerging global challenges that require global solutions.
In this work, we develop an architecture for distributed self-driving laboratories within The World Avatar project, which seeks to create an all-encompassing digital twin based on a dynamic knowledge graph. We employ ontologies to capture data and material flows in design-make-test-analyse cycles, utilising autonomous agents as executable knowledge components to carry out the experimentation workflow. Data provenance is recorded to ensure its findability, accessibility, interoperability, and reusability. We demonstrate the practical application of our framework by linking two robots in Cambridge and Singapore for a collaborative closed-loop optimisation of a pharmaceutically relevant aldol condensation reaction in real time. The knowledge graph autonomously evolves toward the scientist's research goals, with the two robots effectively generating a Pareto front for cost-yield optimisation in three days.
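As a rough illustration of the cost-yield trade-off mentioned in this abstract: a Pareto front is simply the set of non-dominated experiments, those for which no other experiment is both cheaper and higher-yielding. The sketch below uses invented data and names; it is a generic dominance filter, not the authors' implementation.

```python
def pareto_front(points):
    """Return the non-dominated subset of (cost, yield) points,
    minimising cost while maximising yield."""
    front = []
    for c, y in points:
        # A point is dominated if some other point is at least as cheap
        # AND at least as high-yielding.
        dominated = any(
            c2 <= c and y2 >= y and (c2, y2) != (c, y)
            for c2, y2 in points
        )
        if not dominated:
            front.append((c, y))
    return sorted(front)

# Hypothetical (cost, yield) results from four experiments:
experiments = [(1, 0.2), (2, 0.5), (3, 0.4), (4, 0.9)]
front = pareto_front(experiments)  # (3, 0.4) is dominated by (2, 0.5)
```

In a closed-loop campaign such as the one described, each new experiment either joins this front or confirms that a region of the cost-yield space is dominated.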
EURO-NMD registry: federated FAIR infrastructure, innovative technologies and concepts of a patient-centred registry for rare neuromuscular disorders
Abstract
Background
The EURO-NMD Registry collects data from all neuromuscular patients seen at EURO-NMD's expert centres. In-kind contributions from three patient organisations have ensured that the registry is patient-centred, meaningful, and impactful. The consenting process covers other uses, such as research, cohort finding and trial readiness.
Results
The registry has three-layered datasets, with European Commission-mandated data elements (EU-CDEs), a set of cross-neuromuscular data elements (NMD-CDEs) and a dataset of disease-specific data elements that function modularly (DS-DEs). The registry captures clinical, neuromuscular imaging, neuromuscular histopathology, biological and genetic data and patient-reported outcomes in a computer-interpretable format using selected ontologies and classifications. The EURO-NMD registry is connected to the EURO-NMD Registry Hub through an interoperability layer. The Hub provides an entry point to other neuromuscular registries that follow the FAIR data stewardship principles and enable GDPR-compliant information exchange. Four national or disease-specific patient registries are interoperable with the EURO-NMD Registry, allowing for federated analysis across these different resources.
Conclusions
Collectively, the Registry Hub brings together data that are currently siloed and fragmented to improve healthcare and advance research for neuromuscular diseases.
Semantic rules for capability matchmaking in the context of manufacturing system design and reconfiguration
To survive in dynamic markets and meet changing requirements, manufacturing companies must rapidly design new production systems and reconfigure existing ones. The current designer-centric search for feasible resources in various catalogues is a time-consuming and laborious process, which limits the consideration of many alternative resource solutions. This article presents the implementation of an automatic capability matchmaking approach and software, which searches through resource catalogues to find feasible resources and resource combinations for the processing requirements of the product. The approach is based on formal ontology-based descriptions of both products and resources and on the semantic rules used to find the matches. The article focuses on these rules, implemented with the SPIN rule language. They relate to 1) inferring and asserting parameters of the combined capabilities of combined resources and 2) comparing the product characteristics against the capability parameters of the resource (combination). The presented case study proves that the matchmaking system can find feasible matches. However, a human designer must validate the result when making the final resource selection. The approach should speed up system design and reconfiguration planning and allow more alternative solutions to be considered, compared with traditional manual design approaches.
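The core of the matchmaking step can be pictured with a toy example. The sketch below is a plain-Python analogue of the two rule families described above (parameter comparison, and combining capabilities of resource combinations); the parameter names and range semantics are illustrative assumptions, not the article's SPIN rules.

```python
def capabilities_match(requirement, capability):
    """Rule family 2): a product requirement matches a resource when every
    required parameter value falls within the range the resource advertises."""
    for name, required in requirement.items():
        if name not in capability:
            return False
        lo, hi = capability[name]
        if not (lo <= required <= hi):
            return False
    return True

def combine_capabilities(cap_a, cap_b):
    """Rule family 1): a naive combined capability for a resource pair,
    modelled here as the intersection of shared parameter ranges plus the
    parameters only one resource provides."""
    combined = {}
    for name in set(cap_a) | set(cap_b):
        if name in cap_a and name in cap_b:
            lo = max(cap_a[name][0], cap_b[name][0])
            hi = min(cap_a[name][1], cap_b[name][1])
            combined[name] = (lo, hi)
        else:
            combined[name] = cap_a.get(name, cap_b.get(name))
    return combined
```

In the actual system these checks are SPIN rules evaluated over ontology-based descriptions of products and resources; the dictionaries above only mirror the comparison logic.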
Security Aspects in Web of Data Based on Trust Principles. A brief of Literature Review
Within the scientific community there is a certain consensus in defining "Big Data" as a global set formed through a complex integration that embraces several dimensions of data use: research data, Open Data, Linked Data, Social Network Data, etc. These data are scattered across different sources, a mix that responds to diverse philosophies, a great diversity of structures, different denominations, and so on. Their management faces great technological and methodological challenges: the discovery and selection of data, their extraction and final processing, preservation, visualisation, possibility of access, and greater or lesser degree of structuring, among other aspects, revealing a huge domain of study at the level of analysis and implementation across different knowledge domains. However, given the availability of data and its possible opening: what problems does data opening face? This paper presents a literature review of these security aspects.
Functional genomic effects of indels using Bayesian genome-phenome wide association studies in sorghum
High-throughput genomic and phenomic data have enhanced the ability to detect genotype-to-phenotype associations that can resolve broad pleiotropic effects of mutations on plant phenotypes. As the scale of genotyping and phenotyping has advanced, rigorous methodologies have been developed to accommodate larger datasets and maintain statistical precision. However, determining the functional effects of associated genes/loci is expensive and limited due to the complexity associated with cloning and subsequent characterization. Here, we utilized phenomic imputation of a multi-year, multi-environment dataset using PHENIX, which imputes missing data using kinship and correlated traits, and we screened insertions and deletions (InDels) from the recently whole-genome-sequenced Sorghum Association Panel for putative loss-of-function effects. Candidate loci from genome-wide association results were screened for potential loss of function using a Bayesian Genome-Phenome Wide Association Study (BGPWAS) model across both functionally characterized and uncharacterized loci. Our approach is designed to facilitate in silico validation of associations beyond traditional candidate-gene and literature-search approaches, to aid the identification of putative variants for functional analysis, and to reduce the incidence of false-positive candidates in current functional validation methods. Using this Bayesian GPWAS model, we identified associations for previously characterized genes with known loss-of-function alleles, specific genes falling within known quantitative trait loci, and genes without any previous genome-wide associations, while additionally detecting putative pleiotropic effects. In particular, we identified the major tannin haplotypes at the Tan1 locus and the effects of InDels on protein folding. Depending on the haplotype present, heterodimer formation with Tan2 was significantly affected.
We also identified major-effect InDels in Dw2 and Ma1, where proteins were truncated due to frameshift mutations that resulted in early stop codons. These truncated proteins also lost most of their functional domains, suggesting that these InDels likely result in loss of function. Here, we show that the Bayesian GPWAS model is able to identify loss-of-function alleles that can have significant effects on protein structure and folding as well as multimer formation. Our approach to characterizing loss-of-function mutations and their functional repercussions will facilitate precision genomics and breeding by identifying key targets for gene editing and trait integration.
Ecological and confined domain ontology construction scheme using concept clustering for knowledge management
Knowledge management in a structured system is a complicated task that requires common, standardized methods acceptable to all actors in the system. Ontology, in this regard, is a primary element and plays a central role in knowledge management, interoperability between various departments, and better decision making. Ontology construction for structured systems involves logical and structural complications. Researchers have already proposed a variety of domain ontology construction schemes. However, these schemes omit some important phases of ontology construction that make ontologies more collaborative. Furthermore, these schemes do not provide details of the activities and methods involved in constructing an ontology, which may cause difficulty in implementing it. The major objectives of this research were to provide a comparison between some existing ontology construction schemes and to propose an enhanced ecological and confined domain ontology construction (EC-DOC) scheme for structured knowledge management. The proposed scheme introduces five important phases to construct an ontology, with a major focus on the conceptualization and clustering of domain concepts. In the conceptualization phase, a glossary of domain-related concepts and their properties is maintained, and a Fuzzy C-Means soft-clustering mechanism is used to form clusters of these concepts. In addition, the localization of concepts is performed immediately after the conceptualization phase, and a translation file of localized concepts is created. The EC-DOC scheme can provide accurate concepts for the terms of a specific domain, and these concepts can be made available in a preferred local language.
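For concreteness, Fuzzy C-Means differs from hard k-means in that each concept receives a degree of membership in every cluster rather than a single label. Below is a minimal NumPy sketch of the standard algorithm, not the EC-DOC implementation; it assumes concepts have already been embedded as numeric feature vectors.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, seed=0):
    """Minimal Fuzzy C-Means: returns (centers, membership matrix).
    Each row of the membership matrix sums to 1, so a concept can
    belong partially to several clusters; m > 1 controls fuzziness."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), n_clusters))
    u /= u.sum(axis=1, keepdims=True)            # normalise memberships
    for _ in range(n_iter):
        um = u ** m
        # Cluster centers: membership-weighted means of the data.
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
        # Distances from every point to every center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-10)                    # avoid division by zero
        # Standard membership update: u_ij proportional to d_ij^(-2/(m-1)).
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u
```

A concept whose membership is split almost evenly between two clusters is exactly the kind of boundary term a soft scheme keeps visible, which a hard clustering would hide.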
Distributed Text Services (DTS): A Community-Built API to Publish and Consume Text Collections as Linked Data
This paper presents the Distributed Text Service (DTS) API Specification, a community-built effort to facilitate the publication and consumption of texts and their structures as Linked Data. DTS was designed to be as generic as possible, providing simple operations for navigating collections, navigating within a text, and retrieving textual content. While the DTS API uses JSON-LD as the serialization format for non-textual data (e.g., descriptive metadata), TEI XML was chosen as the minimum required format for textual data served by the API in order to guarantee the interoperability of data published by DTS-compliant repositories. This paper describes the DTS API specifications by means of real-world examples, discusses the key design choices that were made, and concludes by providing a list of existing repositories and libraries that support DTS
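To give a flavour of the consumption side, a DTS collection response is a JSON-LD document whose members can be walked generically. The payload below is a hand-made illustration in the spirit of the specification (field names such as "member", "@id", "@type" and "title" follow its JSON-LD style); the identifiers are invented and the "@context" is omitted for brevity.

```python
import json

# Hand-made example of a collection response; identifiers are invented.
SAMPLE_COLLECTION = json.loads("""
{
  "@id": "urn:example:corpus",
  "@type": "Collection",
  "title": "Example corpus",
  "member": [
    {"@id": "urn:example:corpus:iliad", "@type": "Resource", "title": "Iliad"},
    {"@id": "urn:example:corpus:odyssey", "@type": "Resource", "title": "Odyssey"}
  ]
}
""")

def list_members(collection):
    """Return (identifier, title) pairs for each member of a collection
    document, tolerating members without a title."""
    return [(m["@id"], m.get("title", ""))
            for m in collection.get("member", [])]
```

A client would obtain such a document over HTTP from a repository's collections endpoint and recurse into members of type Collection; textual content itself is then retrieved as TEI XML, as the paper notes.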
Semantics, Ontology and Explanation
The terms 'semantics' and 'ontology' are increasingly appearing together with
'explanation', not only in the scientific literature, but also in
organizational communication. However, all of these terms are also being
significantly overloaded. In this paper, we discuss their strong relation under
particular interpretations. Specifically, we discuss a notion of explanation
termed ontological unpacking, which aims at explaining symbolic domain
descriptions (conceptual models, knowledge graphs, logical specifications) by
revealing their ontological commitment in terms of their assumed truthmakers,
i.e., the entities in one's ontology that make the propositions in those
descriptions true. To illustrate this idea, we employ an ontological theory of
relations to explain (by revealing the hidden semantics of) a very simple
symbolic model encoded in the standard modeling language UML. We also discuss
the essential role played by ontology-driven conceptual models (resulting from
this form of explanation processes) in properly supporting semantic
interoperability tasks. Finally, we discuss the relation between ontological
unpacking and other forms of explanation in philosophy and science, as well as
in the area of Artificial Intelligence.