Personalising lung cancer screening with machine learning
Personalised screening is based on a straightforward concept: repeated risk assessment linked to tailored management. However, delivering such programmes at scale is complex. In this work, I aimed to contribute to two areas: the simplification of risk assessment to facilitate the implementation of personalised screening for lung cancer, and the use of synthetic data to support privacy-preserving analytics in the absence of access to patient records.
I first present parsimonious machine learning models for lung cancer screening, demonstrating an approach that couples the performance of model-based risk prediction with the simplicity of risk-factor-based criteria. I trained models to predict the five-year risk of developing or dying from lung cancer using UK Biobank and US National Lung Screening Trial participants, before external validation amongst temporally and geographically distinct ever-smokers in the US Prostate, Lung, Colorectal and Ovarian Screening trial. I found that three predictors – age, smoking duration, and pack-years – within an ensemble machine learning framework achieved or exceeded parity with comparators in discrimination, calibration, and net benefit. Furthermore, I show that these models are more sensitive than risk-factor-based criteria, such as those currently recommended by the US Preventive Services Task Force.
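To make the modelling approach concrete, the following is a minimal sketch of a three-predictor ensemble classifier in scikit-learn. The simulated cohort, coefficients, and choice of base learners are illustrative assumptions; the thesis's actual ensemble framework and its UK Biobank/NLST training data are not reproduced here.

```python
# Minimal sketch of a three-predictor ensemble risk model (illustrative only).
# The simulated data below are placeholders, not UK Biobank or NLST records.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5000
# Placeholder cohort: age (years), smoking duration (years), pack-years.
X = np.column_stack([
    rng.uniform(50, 80, n),   # age
    rng.uniform(10, 50, n),   # smoking duration
    rng.uniform(5, 80, n),    # pack-years
])
# Placeholder outcome loosely increasing with all three predictors.
logit = -9 + 0.05 * X[:, 0] + 0.03 * X[:, 1] + 0.02 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Soft-voting ensemble over two base learners, mirroring the idea of
# coupling a handful of simple predictors with an ensemble framework.
ensemble = VotingClassifier(
    estimators=[
        ("lr", make_pipeline(StandardScaler(), LogisticRegression())),
        ("gbm", GradientBoostingClassifier(random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
risk = ensemble.predict_proba(X_te)[:, 1]  # estimated five-year risk
print(f"AUC: {roc_auc_score(y_te, risk):.3f}")
```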
For the implementation of more personalised healthcare, researchers and developers require ready access to high-quality datasets. As such data are sensitive, their use is subject to tight control, whilst the majority of data present in electronic records are not available for research use. Synthetic data are algorithmically generated but can maintain the statistical relationships present within an original dataset. In this work, I used explicitly privacy-preserving generators to create synthetic versions of the UK Biobank before we performed exploratory data analysis and prognostic model development. Comparing results when using the synthetic against the real datasets, we show the potential for synthetic data in facilitating prognostic modelling.
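A minimal sketch of the train-on-synthetic, evaluate-on-real comparison described above. The generate_synthetic helper below is a hypothetical per-column resampler standing in for the explicitly privacy-preserving generators used in the work, and the data are simulated.

```python
# Sketch of the "train on synthetic, evaluate on real" comparison.
# generate_synthetic() is a naive per-column resampler used as a stand-in;
# it destroys correlations and is NOT a privacy-preserving generator.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def generate_synthetic(df: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
    """Resample each column independently (placeholder generator)."""
    rng = np.random.default_rng(seed)
    return pd.DataFrame(
        {c: rng.choice(df[c].to_numpy(), size=len(df)) for c in df.columns}
    )

rng = np.random.default_rng(1)
real = pd.DataFrame({
    "age": rng.uniform(40, 80, 2000),
    "pack_years": rng.uniform(0, 80, 2000),
})
real["outcome"] = rng.binomial(
    1, 1 / (1 + np.exp(-(-8 + 0.06 * real["age"] + 0.03 * real["pack_years"])))
)

train_real, test_real = train_test_split(real, random_state=1)
synthetic = generate_synthetic(train_real)

# Fit the same prognostic model on real and synthetic training data,
# then compare both on the same real hold-out set.
for name, train in [("real", train_real), ("synthetic", synthetic)]:
    model = LogisticRegression().fit(train[["age", "pack_years"]], train["outcome"])
    auc = roc_auc_score(
        test_real["outcome"],
        model.predict_proba(test_real[["age", "pack_years"]])[:, 1],
    )
    print(f"trained on {name}: AUC on real hold-out = {auc:.3f}")
```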
DeepOnto: A Python Package for Ontology Engineering with Deep Learning
Applying deep learning techniques, particularly language models (LMs), in ontology engineering has attracted widespread attention. However, deep learning frameworks like PyTorch and TensorFlow are predominantly developed for Python programming, while widely-used ontology APIs, such as the OWL API and Jena, are primarily Java-based. To facilitate seamless integration of these frameworks and APIs, we present DeepOnto, a Python package designed for ontology engineering. The package encompasses a core ontology processing module founded on the widely-recognised and reliable OWL API, encapsulating its fundamental features in a more "Pythonic" manner and extending its capabilities to include other essential components such as reasoning, verbalisation, normalisation, and projection. Building on this module, DeepOnto offers a suite of tools, resources, and algorithms that support various ontology engineering tasks, such as ontology alignment and completion, by harnessing deep learning methodologies, primarily pre-trained LMs. In this paper, we also demonstrate the practical utility of DeepOnto through two use cases: Digital Health Coaching at Samsung Research UK and the Bio-ML track of the Ontology Alignment Evaluation Initiative (OAEI).
Comment: under review at the Semantic Web Journal
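As an illustration of the "Pythonic" wrapper idea, here is a hypothetical usage sketch. The import paths, JVM initialisation call, and attribute names follow the package's documented examples as best recalled, but may differ across DeepOnto versions; treat every identifier as an assumption rather than a verified API.

```python
# Hypothetical usage sketch of DeepOnto's core ontology module.
# All names below are assumptions about the API and may differ by version.
from deeponto import init_jvm

# DeepOnto delegates parsing and reasoning to the Java OWL API, so a JVM
# must be started first (memory size is an arbitrary example).
init_jvm("2g")

from deeponto.onto import Ontology

onto = Ontology("example.owl")  # path is a placeholder

# Iterate over named classes exposed by the wrapper (attribute name assumed).
for iri in onto.owl_classes:
    print(iri)
```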
Reframing museum epistemology for the information age: a discursive design approach to revealing complexity
This practice-based research inquiry examines the impact of an epistemic shift, brought about by the dawning of the information age and advances in networked communication technologies, on physical knowledge institutions, focusing on museums. The research charts the adaptation of knowledge schemas used in museum knowledge organisation and discusses the potential for a new knowledge schema, the network, to establish a new epistemology for museums that reflects contemporary hyperlinked and networked knowledge. The research investigates the potential for networked and shared virtual reality spaces to reveal new ‘knowledge monuments’ reflecting the epistemic values of the network society and the space of flows.
The central practice for this thesis focuses on two main elements. The first is applying networks and visual complexity to reveal multi-linearity and adapting perspectives in relational knowledge networks. This concept was explored through two discursive design projects. The first, the Museum Collection Engine, uses data visualisation, cloud data, and image recognition within an immersive projection dome to create a dynamic and searchable museum collection that returns new and interlinking constellations of museum objects and knowledge. The second, Shared Pasts: Decoding Complexity, is an AR app with a unique ‘anti-personalisation’ recommendation system designed to reveal complex narratives around historic objects and places. The second element is folksonomy and co-design in developing new community-focused archives, using the community's language to build the dataset and socially tagged metadata. This was tested by developing two discursive prototypes, Women Reclaiming AI and Sanctuary Stories.
Hybrid human-AI driven open personalized education
Attaining the skills that match labor market demand is getting increasingly complicated, as prerequisite knowledge, skills, and abilities evolve dynamically through an uncontrollable and seemingly unpredictable process. Furthermore, people's interest in gaining knowledge pertaining to their personal lives (e.g., hobbies and life-hacks) has also increased dramatically in recent decades. In this situation, anticipating and addressing learning needs is a fundamental challenge for twenty-first-century education. The need for such technologies has escalated due to the COVID-19 pandemic, during which online education became a key player in all types of training programs. The burgeoning availability of data, not only on the demand side but also on the supply side (in the form of open/free educational resources), coupled with smart technologies, may provide fertile ground for addressing this challenge. Therefore, this thesis aims to contribute to the literature on the utilization of (open and free online) educational resources toward goal-driven personalized informal learning by developing a novel Human-AI based system, called eDoer.
In this thesis, we discuss all the new knowledge that was created in order to complete the system development, which includes 1) prototype development and qualitative user validation, 2) decomposing the preliminary requirements into meaningful components, 3) implementation and validation of each component, and 4) a final requirement analysis followed by combining the implemented components in order to develop and validate the planned system (eDoer).
All in all, our proposed system 1) derives the skill requirements for a wide range of occupations (as skills and jobs are typical goals in informal learning) through an analysis of online job vacancy announcements, 2) decomposes skills into learning topics, 3) collects a variety of open/free online educational resources that address those topics, 4) checks the quality of those resources and their topic relevance using our developed intelligent prediction models, 5) helps learners to set their learning goals, 6) recommends personalized learning pathways and learning content based on individual learning goals, and 7) provides assessment services for learners to monitor their progress towards their desired learning objectives. Accordingly, we created a learning dashboard focusing on three Data Science-related jobs and conducted an initial validation of eDoer through a randomized experiment. Controlling for the effects of prior knowledge as assessed by the pretest, the randomized experiment provided tentative support for the hypothesis that learners who engaged with personal eDoer recommendations attain higher scores on the posttest than those who did not. The hypothesis that learners who received personalized content in terms of format, length, level of detail, and content type would achieve higher scores than those receiving non-personalized content was not supported by a statistically significant result.
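As a flavour of step 6 (matching learning content to a learner's goal), here is a minimal content-based matching sketch using TF-IDF similarity. The resource titles and learning goal are invented, and eDoer's actual recommendation and quality-prediction models are more elaborate than this.

```python
# Minimal sketch of topic-based resource matching (illustrative only;
# the actual eDoer recommendation pipeline is more elaborate).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

resources = [
    "Introduction to linear regression and model evaluation",
    "Pandas tutorial: cleaning and transforming tabular data",
    "A gentle guide to SQL joins and aggregation",
]
learning_goal = "regression models for data science"  # learner-set goal

# Embed resources and the goal in the same TF-IDF space.
vec = TfidfVectorizer(stop_words="english")
matrix = vec.fit_transform(resources + [learning_goal])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Rank open educational resources by relevance to the goal.
for score, title in sorted(zip(scores, resources), reverse=True):
    print(f"{score:.2f}  {title}")
```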
Virtual Reality in Mathematics Education (VRiME): An exploration of the integration and design of virtual reality for mathematics education
This thesis explores the use of Virtual Reality (VR) in mathematics education. Four VR prototypes were designed and developed during the PhD project to teach equations, geometry, and vectors, and to facilitate collaboration. Paper A investigates asymmetric VR for classroom integration and collaborative learning and presents a new taxonomy of asymmetric interfaces. Paper B proposes how VR could assist students with Autism Spectrum Disorder (ASD) in learning daily living skills involving basic mathematical concepts. Paper C investigates how VR could enhance social inclusion and mathematics learning for neurodiverse students. Paper D presents a VR prototype for teaching algebra and equation-solving strategies, noting positive student responses and the potential for knowledge transfer. Paper E investigates gesture-based interaction with dynamic geometry in VR for geometry education and presents a new taxonomy of learning environments. Finally, Paper F explores the use of VR to visualise and contextualise mathematical concepts to teach software engineering students. The thesis concludes that VR offers promising avenues for transforming mathematics education. It aims to broaden our understanding of VR's educational potential, paving the way for more immersive learning experiences in mathematics education.
A web-based platform promoting family communication and cascade genetic testing for families with hereditary breast and ovarian cancer (DIALOGUE study)
The overall aim of this dissertation is to develop an eHealth intervention to promote family communication and cascade genetic testing among families concerned with Hereditary Breast and Ovarian Cancer (HBOC) syndrome. Within this context, an international, multi-centre scientific project entitled "DIALOGUE" was designed, which aims to develop (Phase A) and test the feasibility of (Phase B) an intervention within various genetic clinics across Switzerland and South Korea. This dissertation describes only Phase A: the adaptation of an intervention, a web-based platform designed for families with HBOC to share genetic test results, including usability testing in a sample from Switzerland.
Chapter 1 provides a general introduction to the current field of hereditary cancer and cascade genetic testing, including the current state of eHealth technologies in science. The chapter also includes a short introduction to the prototype developed in the U.S., as well as a description of the DIALOGUE study. In addition, the chapter summarises the main conceptual models, i.e. the Ottawa Decision Support Framework (ODSF) and the Medical Research Council (MRC) framework, which are commonly implemented in the development and evaluation of complex interventions. The rationale of this dissertation is guided by all of these elements.
Chapter 2 provides a detailed description of the dissertation's specific aims, including the three studies conducted. The articles presented in Chapter 3 describe the methodology and findings of the dissertation. Study I comprises a systematic literature review of previous studies, with a particular focus on HBOC and Lynch syndromes. The literature review identified and synthesised evidence from psychoeducational interventions designed to facilitate family communication of genetic testing results and/or cancer predisposition and to promote cascade genetic testing. A meta-analysis was also conducted to assess intervention efficacy in relation to these two research aims. Our findings highlight the need to develop new interventions and approaches to family communication and cascade testing for cancer susceptibility.
Study II describes the state-of-the-art text mining techniques used to detect and classify valuable information from interviews with study participants concerning determinants of open intrafamilial communication regarding genetic cancer risk. This study had two major aims: 1) to quantify openness of communication about HBOC cancer risk, and 2) to examine the role of sentiment in predicting openness of communication. Our findings showed that the overall expressed sentiment was associated with the communication of genetic risk among HBOC families. The analysis identified additional factors that affect openness to communicate genetic risk; these were defined as "high-risk" factors and integrated into the design and development of the intervention.
Study III describes the development of the intervention, a web-based platform designed for families with HBOC to share genetic test results. The platform was developed in line with the quality criteria set by the MRC framework. Being web-based, the platform can be accessed via a laptop, smartphone or tablet. Usability testing was applied to evaluate the prototype intervention, which received high ratings on a satisfaction scale.
Chapter 4 synthesises and discusses the key findings of all the studies presented in the previous chapter, and addresses study limitations and implications for future research.
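To illustrate the kind of text-mining setup Study II describes, the following is a toy sketch that predicts openness of communication from interview snippets with a bag-of-words classifier. The snippets and labels are invented, and the study's actual pipeline (including its sentiment analysis) is not reproduced.

```python
# Toy sketch: classify openness of family communication from interview
# text. Snippets and labels are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

snippets = [
    "We discussed the test results openly at a family dinner",
    "I have not told my sister about the mutation yet",
    "Everyone in the family knows and we support each other",
    "I avoid the topic because it upsets my parents",
]
open_communication = [1, 0, 1, 0]  # invented labels: 1 = open, 0 = not

# Bag-of-words features plus a linear classifier; sentiment scores could
# be appended as extra features, as in the study's second aim.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(snippets, open_communication)
print(clf.predict(["We talk about the risk with our children"]))
```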
Towards a human-centric data economy
Spurred by widespread adoption of artificial intelligence and machine learning, "data" is becoming a key production factor, comparable in importance to capital, land, or labour in an increasingly digital economy. In spite of an ever-growing demand for third-party data in the B2B market, firms are generally reluctant to share their information. This is due to the unique characteristics of "data" as an economic good (a freely replicable, non-depletable asset holding a highly combinatorial and context-specific value), which moves digital companies to hoard and protect their "valuable" data assets, and to integrate across the whole value chain seeking to monopolise the provision of innovative services built upon them. As a result, most of those valuable assets still remain unexploited in corporate silos nowadays.
This situation is shaping the so-called data economy around a number of champions, and it is hampering the benefits of a global data exchange on a large scale. Some analysts have estimated the potential value of the data economy at US$2.5 trillion globally by 2025. Not surprisingly, unlocking the value of data has become a central policy of the European Union, which also estimated the size of the data economy at €827 billion for the EU27 over the same period. Within the scope of the European Data Strategy, the European Commission is also steering initiatives aimed at identifying relevant cross-industry use cases involving different verticals, and at enabling sovereign data exchanges to realise them.
Among individuals, the massive collection and exploitation of personal data by digital firms in exchange for services, often with little or no consent, has raised a general concern about privacy and data protection. Apart from spurring recent legislative developments in this direction, this concern has raised voices warning against the unsustainability of the existing digital economics (few digital champions, potential negative impact on employment, growing inequality), some of which propose that people be paid for their data in a sort of worldwide data labour market as a potential solution to this dilemma [114, 115, 155].
From a technical perspective, we are far from having the required technology and algorithms that will enable such a human-centric data economy. Even its scope is still blurry, and the question of the value of data is, at the very least, controversial. Research works from different disciplines have studied the data value chain, different approaches to the value of data, how to price data assets, and novel data marketplace designs. At the same time, complex legal and ethical issues with respect to the data economy have arisen around privacy, data protection, and ethical AI practices.
In this dissertation, we start by exploring the data value chain and how entities trade data assets over the Internet. We carry out what is, to the best of our knowledge, the most thorough survey of commercial data marketplaces. In this work, we have catalogued and characterised ten different business models, including those of personal information management systems: companies born in the wake of recent data protection regulations that aim to empower end users to take control of their data. We have also identified the challenges faced by different types of entities, and what kinds of solutions and technology they are using to provide their services.
We then present a first-of-its-kind measurement study that sheds light on the prices of data in the market using a novel methodology. We study how ten commercial data marketplaces categorise and classify data assets, and which categories of data command higher prices. We also develop classifiers for comparing data products across different marketplaces, and we study the characteristics of the most valuable data assets and the features that specific vendors use to set the price of their data products. Based on this information, and adding data products offered by 33 other data providers, we develop a regression analysis to reveal features that correlate with the prices of data products. As a result, we also implement the basic building blocks of a novel data pricing tool capable of providing a hint of the market price of a new data product using just its metadata as input. This tool would bring more transparency to the prices of data products in the market, which would help in pricing data assets and in avoiding the price fluctuations inherent to nascent markets.
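A minimal sketch of the metadata-to-price regression idea: fit a regularised linear model on listing metadata and use it to hint at the price of a new product. The features, listings, and model choice are illustrative assumptions, not the dissertation's actual feature set or methodology.

```python
# Sketch of a metadata-to-price regression (features and listings invented;
# the dissertation derives them from real marketplace data).
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

listings = pd.DataFrame({
    "category": ["geospatial", "finance", "marketing", "finance"],
    "update_freq_per_month": [30, 4, 1, 30],
    "n_records_millions": [5.0, 0.2, 12.0, 1.5],
    "price_usd": [2500, 900, 400, 3200],
})

# One-hot encode the category, pass numeric metadata through unchanged.
features = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"), ["category"])],
    remainder="passthrough",
)
model = make_pipeline(features, Ridge())

X = listings.drop(columns="price_usd")
model.fit(X, np.log(listings["price_usd"]))  # regress on log-price

# Hint at the market price of a new, unseen product from its metadata.
new_product = pd.DataFrame([{"category": "finance",
                             "update_freq_per_month": 10,
                             "n_records_millions": 2.0}])
print(f"hinted price: ~${np.exp(model.predict(new_product)[0]):.0f}")
```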
Next we turn to topics related to data marketplace design. In particular, we study how buyers can select and purchase suitable data for their tasks without requiring a priori access to such data in order to make a purchase decision, and how marketplaces can distribute the payoff for a data transaction combining data from different sources among the corresponding providers, be they individuals or firms. The difficulty of both problems is further exacerbated in a human-centric data economy, where buyers have to choose among data from thousands of individuals, and where marketplaces have to distribute payoffs to thousands of people contributing personal data to a specific transaction.
Regarding the selection process, we compare different purchase strategies depending on the level of information available to data buyers at the time of making decisions. A first methodological contribution of our work is proposing a data evaluation stage prior to datasets being selected and purchased by buyers in a marketplace. We show that buyers can significantly improve the performance of the purchasing process just by being provided with a measurement of the performance of their models when trained by the marketplace with each individual eligible dataset. We design purchase strategies that exploit such functionality, calling the resulting algorithm Try Before You Buy, and our work demonstrates over synthetic and real datasets that it can lead to near-optimal data purchasing in O(N) rather than the exponential O(2^N) execution time needed to calculate the optimal purchase.
With regards to the payoff distribution problem, we focus on computing the relative value of spatio-temporal datasets combined in marketplaces for predicting transportation demand and travel time in metropolitan areas. Using large datasets of taxi rides from Chicago, Porto and New York, we show that the value of data is different for each individual and cannot be approximated by its volume. Our results reveal that even more complex approaches based on the "leave-one-out" value are inaccurate. Instead, more complex and acknowledged notions of value from economics and game theory, such as the Shapley value, need to be employed if one wishes to capture the complex effects of mixing different datasets on the accuracy of forecasting algorithms.
However, the Shapley value entails serious computational challenges: its exact calculation requires repetitively training and evaluating every combination of data sources, and hence O(N!) or O(2^N) computational time, which is infeasible for complex models or thousands of individuals. Moreover, our work paves the way to new methods of measuring the value of spatio-temporal data. We identify heuristics, such as entropy or similarity to the average, that show a significant correlation with the Shapley value and can therefore be used to overcome the significant computational challenges posed by Shapley approximation algorithms in this specific context.
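A simplified interpretation of the Try Before You Buy idea, under the assumptions that the marketplace reports validation accuracy for a model trained on each individual candidate dataset (N evaluations) and that the buyer then purchases greedily in rank order; the dissertation's exact algorithm may differ.

```python
# Simplified Try-Before-You-Buy-style selection loop (an interpretation of
# the description above, not the dissertation's exact algorithm).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)

def make_dataset(n, noise):
    """Simulated data source; higher noise means less useful labels."""
    X = rng.normal(size=(n, 3))
    y = ((X @ np.array([1.0, -1.0, 0.5]) + rng.normal(0, noise, n)) > 0)
    return X, y.astype(int)

candidates = [make_dataset(200, noise) for noise in (0.1, 0.5, 3.0, 5.0)]
X_val, y_val = make_dataset(500, 0.1)  # buyer's validation task

def score(datasets):
    """Marketplace-side evaluation: accuracy of a model trained on the
    union of the given datasets, measured on the buyer's validation set."""
    X = np.vstack([d[0] for d in datasets])
    y = np.concatenate([d[1] for d in datasets])
    return accuracy_score(y_val, LogisticRegression().fit(X, y).predict(X_val))

# Step 1: O(N) individual evaluations reported by the marketplace.
ranking = sorted(range(len(candidates)),
                 key=lambda i: score([candidates[i]]), reverse=True)

# Step 2: greedy purchase in rank order, keeping additions that help.
bought = [candidates[ranking[0]]]
best = score(bought)
for i in ranking[1:]:
    s = score(bought + [candidates[i]])
    if s > best:
        bought.append(candidates[i])
        best = s
print(f"purchased {len(bought)} datasets, validation accuracy {best:.3f}")
```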
We conclude with a number of open issues and propose further research directions that leverage the contributions and findings of this dissertation. These include monitoring data transactions to better measure data markets, and complementing market data with actual transaction prices to build a more accurate data pricing tool. A human-centric data economy would also require that the contributions of thousands of individuals to machine learning tasks be calculated daily. For that to be feasible, we need to further optimise the efficiency of the data purchasing and payoff calculation processes in data marketplaces. In that direction, we also point to alternatives to repetitively training and evaluating a model, both for selecting data with Try Before You Buy and for approximating the Shapley value. Finally, we discuss the challenges and potential technologies that can help with building a federation of standardised data marketplaces.
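For context on approximating the Shapley value without the exact O(2^N) computation, here is a generic Monte Carlo permutation-sampling sketch. The value function is a placeholder (in a marketplace it would be, for example, the forecasting accuracy of a model trained on the coalition's combined data); this is a textbook approximation method, not the dissertation's own heuristics.

```python
# Monte Carlo (permutation-sampling) approximation of Shapley values for
# N data sources. value() is a placeholder utility; real use would train
# and evaluate a model on each coalition's combined data.
import numpy as np

rng = np.random.default_rng(3)
N = 5                                  # number of data sources
true_weights = rng.uniform(0, 1, N)    # hidden per-source contributions

def value(coalition: frozenset) -> float:
    """Placeholder utility of a coalition (here: additive by design)."""
    return float(sum(true_weights[i] for i in coalition))

def shapley_mc(n_sources: int, n_permutations: int = 2000) -> np.ndarray:
    """Average each source's marginal contribution over random orderings."""
    phi = np.zeros(n_sources)
    for _ in range(n_permutations):
        coalition, prev = frozenset(), 0.0
        for i in rng.permutation(n_sources):
            coalition = coalition | {i}
            v = value(coalition)
            phi[i] += v - prev
            prev = v
    return phi / n_permutations

print(np.round(shapley_mc(N), 3))   # approximates true_weights here,
print(np.round(true_weights, 3))    # since value() is additive
```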
The data economy will develop fast in the upcoming years, and researchers from different disciplines will work together to unlock the value of data and make the most of it. Perhaps the proposal that we be paid for our data and our contribution to the data economy will eventually take off, or perhaps other proposals, such as the robot tax, will instead be used to balance the power between individuals and tech firms in the digital economy. Still, we hope our work sheds light on the value of data and contributes to making the price of data more transparent and, eventually, to moving towards a human-centric data economy.
Design of an E-learning system using semantic information and cloud computing technologies
Humanity currently faces many difficult problems that threaten its survival, and everyone can be affected by them, directly or indirectly. Education is a key solution for most of them. In this thesis, we try to make use of current technologies to enhance and ease the learning process.
We have designed an e-learning system based on semantic information and cloud computing, in addition to many other technologies that contribute to improving the educational process and raising the level of students. The design was informed by extensive research into relevant technologies, their types, and examples of actual systems previously discussed by other researchers.
In addition to the proposed design, an algorithm was implemented to identify topics found in large textual educational resources. It was tested and proved efficient compared with other methods. The algorithm can extract the main topics from textual learning resources, link related resources, and generate interactive dynamic knowledge graphs. It accomplishes these tasks accurately and efficiently, even for large books. We used Wikipedia Miner, TextRank, and Gensim within our algorithm. Our algorithm's accuracy was evaluated against Gensim's, showing a large improvement.
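For flavour, here is a toy TextRank-style keyword extractor: build a word co-occurrence graph over a sliding window and rank words with PageRank. This illustrates the family of techniques mentioned (TextRank), not the thesis's actual Wikipedia Miner/Gensim pipeline; the sample text, stop-word list, and window size are arbitrary choices.

```python
# Toy TextRank-style keyword extraction: co-occurrence graph + PageRank.
import re
import networkx as nx

text = ("Cloud computing lets e-learning systems scale. Semantic "
        "information links learning resources, and knowledge graphs "
        "connect topics across learning resources.")

tokens = re.findall(r"[a-z]+", text.lower())
stop = {"and", "lets", "the", "a", "an", "of", "to", "across"}
words = [w for w in tokens if w not in stop and len(w) > 2]

# Connect words that co-occur within a sliding window of 3 tokens.
g = nx.Graph()
window = 3
for i, w in enumerate(words):
    for u in words[i + 1 : i + window]:
        if u != w:
            g.add_edge(w, u)

# PageRank scores serve as keyword importance.
scores = nx.pagerank(g)
for word, score in sorted(scores.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{score:.3f}  {word}")
```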
Augmenting the system design with the implemented algorithm will produce many useful services for improving the learning process, such as: identifying the main topics of large textual learning resources automatically and connecting them to other well-defined concepts from Wikipedia; enriching current learning resources with semantic information from external sources; providing students with browsable, dynamic, interactive knowledge graphs; and making use of learning groups to encourage students to share their learning experiences and feedback with other learners.