89 research outputs found
Knowledge Patterns for the Web: extraction, transformation and reuse
This thesis investigates methods and software architectures for discovering the typical and frequently occurring structures used for organizing knowledge on the Web. We identify these structures as Knowledge Patterns (KPs). KP discovery needs to address two main research problems: the heterogeneity of sources, formats and semantics on the Web (i.e., the knowledge soup problem) and the difficulty of drawing a relevant boundary around data that captures the meaningful knowledge with respect to a certain context (i.e., the knowledge boundary problem). Hence, we introduce two methods that provide different solutions to these problems by tackling KP discovery from two different perspectives: (i) the transformation of KP-like artifacts into KPs formalized as OWL2 ontologies; (ii) the bottom-up extraction of KPs by analyzing how data are organized in Linked Data. The two methods address the knowledge soup and boundary problems in different ways. The first method provides a solution to the two aforementioned problems based on a purely syntactic transformation step from the original source to RDF, followed by a refactoring step whose aim is to add semantics to the RDF by selecting meaningful RDF triples. The second method draws boundaries around RDF data in Linked Data by analyzing type paths. A type path is a possible route through an RDF graph that takes into account the types associated with the nodes along the path.
We then present K~ore, a software architecture conceived as the basis for developing KP discovery systems and designed according to two software architectural styles, i.e., Component-based and REST.
Finally, we provide an example of KP reuse based on Aemoo, an exploratory search tool that exploits KPs for performing entity summarization.
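The notion of a type path described above can be illustrated with a minimal sketch. The graph, node types, and names below are invented for illustration and are not the thesis's implementation, which works over actual Linked Data:

```python
# Toy sketch of "type paths": a path through an RDF graph projected onto the
# rdf:type of each node it traverses. All data here are hypothetical.
graph = {  # adjacency list standing in for RDF triples (predicates omitted)
    "Bologna": ["Italy"],
    "Rome": ["Italy"],
    "Italy": ["Europe"],
}
node_type = {  # rdf:type of each node
    "Bologna": "City", "Rome": "City",
    "Italy": "Country", "Europe": "Continent",
}

def type_paths(start, max_len=3):
    """Enumerate paths from `start` and project each onto node types."""
    paths = []
    def walk(node, path):
        path = path + [node]
        if len(path) > 1:
            paths.append(tuple(node_type[n] for n in path))
        if len(path) < max_len:
            for nxt in graph.get(node, []):
                walk(nxt, path)
    walk(start, [])
    return paths

print(type_paths("Bologna"))
# Bologna and Rome yield the same type paths, e.g. ('City', 'Country'):
# it is this type-level abstraction, rather than the individual resources,
# that is used to draw boundaries around similarly organized data.
```

The point of the projection is that many concrete paths collapse onto few type paths, which makes recurring organization patterns visible.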
Automatically Drafting Ontologies from Competency Questions with FrODO
We present the Frame-based ontology Design Outlet (FrODO), a novel method and
tool for drafting ontologies from competency questions automatically.
Competency questions are expressed in natural language and are a common
solution for representing requirements in a number of agile ontology
engineering methodologies, such as eXtreme Design (XD) or SAMOD. FrODO
builds on top of FRED; it leverages frame semantics to draw
domain-relevant boundaries around the RDF produced by FRED from a competency
question, thus drafting domain ontologies. We carried out a user-based study
to assess how well FrODO supports engineers in ontology design tasks. The
study shows that FrODO is effective in this respect and that the resulting
ontology drafts are of good quality.
A Reference Software Architecture for Social Robots
Social Robotics poses tough challenges to software designers, who are required
to take care of difficult architectural drivers such as acceptability and trust,
as well as to guarantee that robots establish a personalised interaction
with their users. Moreover, recurrent software design issues
such as ensuring interoperability and improving the reusability and customizability
of software components also arise in this context.
Designing and implementing social robotic software architectures is a
time-intensive activity requiring multi-disciplinary expertise: this makes
it difficult to rapidly develop, customise, and personalise robotic solutions.
These challenges may be mitigated at design time by choosing certain
architectural styles, implementing specific architectural patterns and using
particular technologies.
Leveraging our experience in the MARIO project, in this paper we propose a
series of principles that social robots may benefit from. These principles
also lay the foundations for the design of a reference software architecture for
Social Robots. The ultimate goal of this work is to establish a common ground,
based on a reference software architecture, that allows robotic
software components to be easily reused in order to rapidly develop, implement,
and personalise Social Robots.
Do altmetrics work for assessing research quality?
Alternative metrics (aka altmetrics) are gaining increasing interest in the scientometrics community as they can capture both the volume and the quality of attention that a research work receives online. Nevertheless, there is limited knowledge about their effectiveness as a means of measuring the impact of research compared to traditional citation-based indicators. This work rigorously investigates whether any correlation exists among traditional indicators (i.e., citation count and h-index) and alternative ones (i.e., altmetrics), and which of them may be effective for evaluating scholars. The study is based on the analysis of real data from the National Scientific Qualification procedure held in Italy by committees of peers on behalf of the Italian Ministry of Education, Universities and Research.
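The kind of correlation analysis described above can be sketched in miniature with Spearman's rank correlation, a standard choice for comparing indicator rankings. The scholar data below are invented for illustration; the study itself uses real National Scientific Qualification data:

```python
# Minimal sketch of rank correlation between two indicators, e.g. citation
# counts vs. an altmetric score per scholar. Toy data, invented values.

def ranks(values):
    """Rank values from 1 (smallest); assumes no ties for simplicity."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(x, y):
    """Spearman's rho: 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n * n - 1))

citations = [120, 45, 300, 10, 80]   # hypothetical citation counts
altmetric = [95, 30, 250, 5, 60]     # hypothetical altmetric scores
print(spearman_rho(citations, altmetric))  # -> 1.0 (identical rankings)
```

A rho near 1 would mean the altmetric ranks scholars much like citations do; a rho near 0 would suggest it measures something different.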
SQuAP-Ont: an ontology of software quality relational factors from financial systems
Quality, architecture, and process are considered the keystones of software engineering, and ISO defines them in three separate standards; however, their interaction has been scarcely studied so far. The SQuAP model (Software Quality, Architecture, Process) describes twenty-eight main factors that impact software quality in banking systems, where each factor is described as a relation among characteristics from the three ISO standards. Hence, SQuAP makes such relations emerge rigorously, although informally. In this paper, we present SQuAP-Ont, an OWL ontology designed by following a well-established methodology based on the reuse of Ontology Design Patterns (ODPs). SQuAP-Ont formalises the relations emerging from SQuAP in order to represent and reason, via Linked Data, about software engineering in a three-dimensional model consisting of quality, architecture, and process ISO characteristics.
Adoption of Digital Technologies in Health Care During the COVID-19 Pandemic: Systematic Review of Early Scientific Literature
Background: The COVID-19 pandemic is favoring digital transitions in many industries and in society as a whole. Health care organizations have responded to the first phase of the pandemic by rapidly adopting digital solutions and advanced technology tools. Objective: The aim of this review is to describe the digital solutions that have been reported in the early scientific literature to mitigate the impact of COVID-19 on individuals and health systems. Methods: We conducted a systematic review of early COVID-19-related literature (from January 1 to April 30, 2020) by searching MEDLINE and medRxiv with appropriate terms to find relevant literature on the use of digital technologies in response to the pandemic. We extracted study characteristics such as the paper title, journal, and publication date, and we categorized the retrieved papers by the type of technology and patient needs addressed. We built a scoring rubric by cross-classifying the patient needs with the type of technology. We also extracted information and classified each technology reported by the selected articles according to health care system target, grade of innovation, and scalability to other geographical areas. Results: The search identified 269 articles, of which 124 full-text articles were assessed and included in the review after screening. Most of the selected articles addressed the use of digital technologies for diagnosis, surveillance, and prevention. We report that most of these digital solutions and innovative technologies have been proposed for the diagnosis of COVID-19. In particular, within the reviewed articles, we identified numerous suggestions on the use of artificial intelligence (AI)-powered tools for the diagnosis and screening of COVID-19. Digital technologies are also useful for prevention and surveillance measures, such as contact-tracing apps and monitoring of internet searches and social media usage. 
Fewer scientific contributions address the use of digital technologies for lifestyle empowerment or patient engagement. Conclusions: In the field of diagnosis, digital solutions that integrate with traditional methods, such as AI-based diagnostic algorithms drawing on both imaging and clinical data, appear to be promising. For surveillance, digital apps have already proven their effectiveness; however, problems related to privacy and usability remain. For other patient needs, several solutions have been proposed, such as telemedicine or telehealth tools. These tools have long been available, but this historical moment may actually be favoring their definitive large-scale adoption. It is worth taking advantage of the impetus provided by the crisis; it is also important to keep track of the digital solutions currently being proposed, so as to implement best practices and models of care in the future and to adopt at least some of the solutions proposed in the scientific literature, especially in national health systems, which have proved particularly resistant to the digital transition in recent years.
Semi-Automatic Systematic Literature Reviews and Information Extraction of COVID-19 Scientific Evidence: Description and Preliminary Results of the COKE Project
The COVID-19 pandemic highlighted the importance of validated and updated scientific
information to help policy makers, healthcare professionals, and the public. The speed of disseminating reliable information, and of implementing the subsequent guidelines and policies, is also essential
to save as many lives as possible. Trustworthy guidelines should be based on a systematic evidence
review which uses reproducible analytical methods to collect secondary data and analyse them.
However, the guidelines’ drafting process is time consuming and requires a great deal of resources.
This paper aims to highlight the importance of accelerating and streamlining the extraction and
synthesis of scientific evidence, specifically within the systematic review process. To do so, this paper
describes the COKE (COVID-19 Knowledge Extraction framework for next generation discovery science) Project, which involves the use of machine reading and deep learning to design and implement
a semi-automated system that supports and enhances the systematic literature review and guideline
drafting processes. Specifically, we propose a framework for aiding in the literature selection and
navigation process that employs natural language processing and clustering techniques for selecting
and organizing the literature for human consultation, according to PICO (Population/Problem, Intervention, Comparison, and Outcome) elements. We show some preliminary results of the automatic
classification of sentences on a dataset of abstracts related to COVID-19.
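The PICO-based organisation of sentences described above can be illustrated with a deliberately simple sketch. The COKE system uses machine reading and deep learning; the keyword lists, sentences, and function below are invented for illustration only:

```python
# Toy sketch of organising sentences by PICO elements via keyword matching.
# A stand-in for the NLP classification step; all names and data are invented.
PICO_KEYWORDS = {
    "Population":   ["patients", "adults", "children", "participants"],
    "Intervention": ["treatment", "vaccine", "therapy", "dose"],
    "Comparison":   ["placebo", "control group", "versus"],
    "Outcome":      ["mortality", "recovery", "adverse events", "efficacy"],
}

def tag_pico(sentence):
    """Return the PICO elements whose keywords occur in the sentence."""
    text = sentence.lower()
    return [element for element, words in PICO_KEYWORDS.items()
            if any(w in text for w in words)]

sentences = [
    "Adults receiving the vaccine showed higher recovery rates.",
    "The control group received a placebo.",
]
for s in sentences:
    print(s, "->", tag_pico(s))
```

Grouping sentences by the elements they mention is what lets a reviewer jump straight to, say, all Outcome statements when screening a large set of abstracts.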
Pattern-based design applied to cultural heritage knowledge graphs
Ontology Design Patterns (ODPs) have become an established and recognised
practice for guaranteeing good quality ontology engineering. There are several
ODP repositories where ODPs are shared as well as ontology design methodologies
recommending their reuse. Performing rigorous testing is recommended as well
for supporting ontology maintenance and validating the resulting resource
against its motivating requirements. Nevertheless, it is less than
straightforward to find guidelines on how to apply such methodologies for
developing domain-specific knowledge graphs. ArCo is the knowledge graph of
Italian Cultural Heritage and has been developed by using eXtreme Design (XD),
an ODP- and test-driven methodology. During its development, XD has been
adapted to the needs of the CH domain, e.g. by gathering requirements from an open,
diverse community of consumers; a new ODP has been defined, and many have been
specialised to address specific CH requirements. This paper presents ArCo and
describes how to apply XD to the development and validation of a CH knowledge
graph, also detailing the (intellectual) process implemented for matching the
encountered modelling problems to ODPs. Relevant contributions also include a
novel web tool for supporting unit-testing of knowledge graphs, a rigorous
evaluation of ArCo, and a discussion of methodological lessons learned during
ArCo development.
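The unit-testing of knowledge graphs mentioned above can be sketched in miniature: each test encodes a competency question as a check over the graph. The triples and names below are invented for illustration; real KG unit tests are typically written as SPARQL queries against the actual graph:

```python
# Miniature sketch of unit-testing a knowledge graph against a competency
# question. Toy triples with hypothetical CURIEs, not ArCo's actual data.
KG = {
    ("ex:Painting_1", "rdf:type",     "ex:CulturalProperty"),
    ("ex:Painting_1", "ex:hasAuthor", "ex:Caravaggio"),
    ("ex:Painting_1", "ex:locatedIn", "ex:Uffizi"),
}

def test_cultural_properties_have_author():
    """CQ: 'Who is the author of a cultural property?' must be answerable
    for every instance typed as ex:CulturalProperty."""
    props = {s for (s, p, o) in KG
             if p == "rdf:type" and o == "ex:CulturalProperty"}
    for prop in props:
        assert any(p == "ex:hasAuthor"
                   for (s, p, o) in KG if s == prop), prop

test_cultural_properties_have_author()
print("all KG unit tests passed")
```

A failing test of this kind points at data (or modelling) that cannot answer one of the motivating requirements, which is exactly the validation role testing plays in XD.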
The Landscape of Ontology Reuse Approaches
Ontology reuse aims to foster interoperability and facilitate knowledge
reuse. Several approaches are typically evaluated by ontology engineers when
bootstrapping a new project. However, current practices are often motivated by
subjective, case-by-case decisions, which hamper the definition of a
recommended behaviour. In this chapter we argue that to date there are no
effective solutions for supporting developers' decision-making process when
deciding on an ontology reuse strategy. The objective is twofold: (i) to survey
current approaches to ontology reuse, presenting motivations, strategies,
benefits and limits, and (ii) to analyse two representative approaches and
discuss their merits.