
    Ontology mapping: the state of the art

    Ontology mapping is seen as a solution provider in today's landscape of ontology research. As the number of ontologies made publicly available and accessible on the Web steadily increases, so does the need for applications to use them. A single ontology is no longer enough to support the tasks envisaged by a distributed environment like the Semantic Web: multiple ontologies need to be accessed from several applications. Mapping could provide a common layer through which several ontologies could be accessed and could hence exchange information in a semantically sound manner. Developing such mappings has been the focus of a variety of works originating from diverse communities over a number of years. In this article we comprehensively review and present these works. We also provide insights into the pragmatics of ontology mapping and elaborate on a theoretical approach for defining ontology mapping.
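    To make the "common layer" idea concrete, here is a minimal Python sketch (an illustration, not the article's method) using rdflib: two hypothetical ontologies, ontoA and ontoB, are bridged by an owl:equivalentClass axiom, so a query phrased in one ontology's terms can reach data described with the other's.

```python
# Sketch: an ontology mapping as an owl:equivalentClass axiom (rdflib).
# The ontologies ontoA/ontoB and the instance item42 are hypothetical.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, OWL

A = Namespace("http://example.org/ontoA#")
B = Namespace("http://example.org/ontoB#")

g = Graph()
# Instance data described with ontology B's vocabulary.
g.add((B.item42, RDF.type, B.Publication))
# The mapping layer: A:Article and B:Publication denote the same class.
g.add((A.Article, OWL.equivalentClass, B.Publication))

# A query posed in ontology A's terms, answered through the mapping
# by following owl:equivalentClass in either direction.
results = g.query("""
    PREFIX a:   <http://example.org/ontoA#>
    PREFIX owl: <http://www.w3.org/2002/07/owl#>
    SELECT ?x WHERE {
        ?cls owl:equivalentClass|^owl:equivalentClass a:Article .
        ?x a ?cls .
    }
""")
for row in results:
    print(row.x)  # -> http://example.org/ontoB#item42
```

    The mapping axiom here is written by hand; the works the article surveys are largely about discovering such correspondences (semi-)automatically.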

    One Digital Health is FAIR

    The One Digital Health (ODH) framework aims at transforming future health ecosystems and guiding the implementation of a digital-technologies-based systemic approach to caring for human and animal health in a managed surrounding environment. To integrate and use the data generated by the ODH data sources, "FAIRness" stands as a prerequisite for proper data management and stewardship.

    On the Move to Meaningful Internet Systems: OTM 2015 Workshops: Confederated International Workshops: OTM Academy, OTM Industry Case Studies Program, EI2N, FBM, INBAST, ISDE, META4eS, and MSC 2015, Rhodes, Greece, October 26-30, 2015. Proceedings

    This volume constitutes the refereed proceedings of the following 8 international workshops: OTM Academy; OTM Industry Case Studies Program; Enterprise Integration, Interoperability, and Networking, EI2N; International Workshop on Fact Based Modeling 2015, FBM; Industrial and Business Applications of Semantic Web Technologies, INBAST; Information Systems in Distributed Environment, ISDE; Methods, Evaluation, Tools and Applications for the Creation and Consumption of Structured Data for the e-Society, META4eS; and Mobile and Social Computing for collaborative interactions, MSC 2015. These workshops were held as associated events at OTM 2015, the federated conferences "On The Move Towards Meaningful Internet Systems and Ubiquitous Computing", in Rhodes, Greece, in October 2015. The 55 full papers presented together with 3 short papers and 2 posters were carefully reviewed and selected from a total of 100 submissions. The workshops share the distributed aspects of modern computing systems and experience the application pull created by the Internet and the so-called Semantic Web, in particular developments in Big Data, the increased importance of security issues, and the globalization of mobile-based technologies.

    Challenge 6: Open Science: reproducibility, transparency and reliability

    Open Science is becoming a new paradigm in scientific research, and complex changes are underway. This new way of developing knowledge requires a great transformation that will allow science to adapt efficiently and effectively to the urgency of the problems to be solved, while ensuring the reproducibility, transparency and reliability of scientific results. This chapter analyzes the impact of this change of model, the challenges to be addressed and the expected benefits.

    HPC-oriented Canonical Workflows for Machine Learning Applications in Climate and Weather Prediction

    Machine learning (ML) applications in weather and climate are gaining momentum as big data and the immense increase in high-performance computing (HPC) power are paving the way. Ensuring FAIR data and reproducible ML practices are significant challenges for Earth system researchers. Even though the FAIR principles are well known to many scientists, research communities are slow to adopt them. The Canonical Workflow Framework for Research (CWFR) provides a platform to ensure the FAIRness and reproducibility of these practices without overwhelming researchers. This conceptual paper envisions a holistic CWFR approach towards ML applications in weather and climate, focusing on HPC and big data. Specifically, we discuss the FAIR Digital Object (FDO) and Research Object (RO) in the DeepRain project to achieve granular reproducibility. DeepRain is a project that aims to improve precipitation forecasting in Germany by using ML. Our concept envisages a raster datacube to provide data harmonization and fast and scalable data access. We suggest the Jupyter notebook as a single reproducible experiment. In addition, we envision JupyterHub as a scalable and distributed central platform that connects all these elements and the HPC resources to the researchers via an easy-to-use graphical interface.
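    To make the datacube idea tangible, the following Python sketch (synthetic data and coordinates are assumptions, not DeepRain code) builds a small (time, lat, lon) precipitation cube with xarray and shows the label-based slicing and aggregation that a harmonised raster datacube affords inside a Jupyter notebook experiment.

```python
# Sketch: a synthetic raster datacube for precipitation, accessed with xarray.
import numpy as np
import pandas as pd
import xarray as xr

# Hypothetical hourly precipitation over a 10x10 grid (stand-in for model output).
cube = xr.Dataset(
    {"precip": (("time", "lat", "lon"),
                np.random.default_rng(0).random((24, 10, 10)))},
    coords={
        "time": pd.date_range("2021-07-01", periods=24, freq="h"),
        "lat": np.linspace(47.0, 55.0, 10),  # roughly Germany's latitude span
        "lon": np.linspace(6.0, 15.0, 10),
    },
)

# Label-based access: a sub-region selected by coordinates,
# then aggregated to a daily total per grid cell.
subset = cube.precip.sel(lat=slice(50.0, 52.0), lon=slice(8.0, 10.0))
daily_total = subset.resample(time="1D").sum()
print(daily_total)
```

    In the envisaged setup the same notebook, pinned to a specific datacube slice and environment, would be the unit of reproducibility registered as an FDO/RO.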

    Pattern-based access control in a decentralised collaboration environment

    As the building industry rapidly catches up with digital advancements, and Web technologies grow in both maturity and security, a data- and Web-based construction practice comes within reach. In such an environment, private project information and open online data can be combined to allow cross-domain interoperability at the data level, using Semantic Web technologies. As construction projects often feature complex and temporary networks of stakeholder firms and their employees, a property-based access control mechanism is necessary to enable flexible and automated management of distributed building projects. In this article, we propose a method to facilitate such a mechanism using existing Web technologies: RDF, SHACL, WebIDs, nanopublications and the Linked Data Platform. The proposed method is illustrated with an extension of a custom Node.js Solid server. The potential of the Solid ecosystem has been put forward earlier as a basis for a Linked Data-based Common Data Environment: its decentralised setup, connection of both RDF and non-RDF resources, and fine-grained access control mechanisms are considered an apt foundation for managing distributed building data.
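    As a rough illustration of property-based access control with these technologies (a sketch assuming rdflib and pySHACL, not the authors' Solid server extension), a SHACL shape can encode a pattern such as "any agent granted write access must hold a given role"; validating the project's RDF data against the shape then flags agents that do not match the pattern.

```python
# Sketch: a SHACL shape as a hypothetical access-control pattern (pySHACL).
from rdflib import Graph
from pyshacl import validate

shapes = Graph().parse(data="""
    @prefix sh: <http://www.w3.org/ns/shacl#> .
    @prefix ex: <http://example.org/acl#> .

    # Pattern: every write-capable agent must carry the role "contractor".
    ex:WriteAgentShape a sh:NodeShape ;
        sh:targetClass ex:WriteAgent ;
        sh:property [
            sh:path ex:role ;
            sh:hasValue "contractor" ;
        ] .
""", format="turtle")

data = Graph().parse(data="""
    @prefix ex: <http://example.org/acl#> .

    ex:alice a ex:WriteAgent ; ex:role "contractor" .
    ex:bob   a ex:WriteAgent ; ex:role "visitor" .
""", format="turtle")

conforms, _, report = validate(data, shacl_graph=shapes)
print(conforms)  # False: ex:bob lacks the required role value
print(report)
```

    In a decentralised setting such a check would run server-side, with agents identified by their WebIDs rather than the hypothetical ex: names used here.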

    Advancing Research Data Management in Universities of Science and Technology

    The white paper ‘Advancing Research Data Management in Universities of Science and Technology’ shares insights on the state of the art in research data management (RDM), together with recommendations for its advancement. A core part of the paper is the results of a survey, which was distributed to our member institutions in 2019 and addressed the following aspects of RDM: (i) the establishment of an RDM policy at the university; (ii) the provision of suitable RDM infrastructure and tools; and (iii) the establishment of RDM support services and training tailored to the requirements of science and technology disciplines. The paper reveals that while substantial progress has been made, there is still a long way to go when it comes to establishing “advanced-degree programmes at our major universities for the emerging field of data scientist”, as recommended in the seminal 2010 report ‘Riding the Wave’, and our white paper offers concrete recommendations and best practices for university leaders, researchers, operational staff, and policy makers. The topic of RDM has become a focal point in many scientific disciplines, in Europe and globally. The management and full utilisation of research data are now also at the top of the European agenda, as exemplified by Ursula von der Leyen’s address at this year’s World Economic Forum. However, the implementation of RDM remains divergent across Europe. The white paper was written by a diverse team of RDM specialists, including data scientists and data stewards, with the work led by the RDM subgroup of our Task Force Open Science. The writing team included Angelina Kraft (Head of Lab Research Data Services at TIB, Leibniz University Hannover), who said: “The launch of RDM courses and teaching materials at universities of science and technology is a first important step to motivate people to manage their data. Furthermore, professors and PIs of all disciplines should actively support data management and motivate PhD students to publish their data in recognised digital repositories.” Also part of the writing team were Barbara Sanchez (Head of Centre for Research Data Management, TU Wien) and Malgorzata Goraczek (International Research Support / Data Management Support, TU Wien), who added: “A reliable research data infrastructure is a central component of any RDM service. In addition to the infrastructure, proper RDM is all about communication and cooperation. This includes bringing tools, infrastructures, staff and units together.” Alastair Dunning (Head of 4TU.ResearchData, Delft University of Technology), also one of the writers, added: “There is a popular misconception that better research data management only means faster and more efficient computers. In this white paper, we emphasise the role that training and a culture of good research data management must play.”

    Secure Development of Big Data Ecosystems

    A Big Data environment is a powerful and complex ecosystem that helps companies extract important information from data to make the best business and strategic decisions. In this context, due to the quantity, variety, and sensitivity of the data managed by these systems, as well as the heterogeneity of the technologies involved, privacy and security become especially crucial issues. However, addressing these concerns in Big Data environments is not trivial, and it cannot be done from a partial or isolated perspective. It must be carried out through a holistic approach, starting from the definition of requirements and policies, and present in every relevant activity of development and deployment. Therefore, in this paper we propose a methodological approach for integrating security and privacy into Big Data development, based on the main standards and common practices. In this way, we have defined a development process for this kind of ecosystem that considers not only security in all phases of the process but also the inherent characteristics of Big Data. We describe this process through a set of phases that covers all the relevant stages of the development of Big Data environments, supported by a customized security reference architecture (SRA) that defines the main components of such systems along with the key concepts of security.