The Curious Case of the PDF Converter that Likes Mozart: Dissecting and Mitigating the Privacy Risk of Personal Cloud Apps
Third-party apps that work on top of personal cloud services such as Google
Drive and Dropbox require access to the user's data in order to provide some
functionality. Through detailed analysis of a hundred popular Google Drive apps
from Google's Chrome store, we discover that the existing permission model is
quite often misused: around two thirds of analyzed apps are over-privileged,
i.e., they access more data than is needed for them to function. In this work,
we analyze three different permission models that aim to discourage users from
installing over-privileged apps. In experiments with 210 real users, we
discover that the most successful permission model is our novel ensemble method
that we call Far-reaching Insights. Far-reaching Insights inform the users
about the data-driven insights that apps can derive about them (e.g., their
topics of interest, collaboration and activity patterns). Thus, they seek
to bridge the gap between what third parties can actually know about users and
users' perception of their privacy leakage. The efficacy of Far-reaching
Insights in bridging this gap is demonstrated by our results, as Far-reaching
Insights prove to be, on average, twice as effective as the current model in
discouraging users from installing over-privileged apps. In an effort to
promote general privacy awareness, we deploy a publicly available
privacy-oriented app store that uses Far-reaching Insights. Based on the knowledge
extracted from data of the store's users (over 115 gigabytes of Google Drive
data from 1440 users with 662 installed apps), we also delineate the ecosystem
for third-party cloud apps from the standpoint of developers and cloud
providers. Finally, we present several general recommendations that can guide
other future work in the area of privacy for the cloud.
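The over-privilege check described above can be sketched as a comparison between the OAuth scope an app requests and the narrowest scope sufficient for its declared functionality. The scope URLs below are real Google Drive scopes; the app catalogue, the privilege ordering, and the helper function are hypothetical simplifications for illustration, not the authors' actual analysis pipeline.

```python
# Real Google Drive OAuth scopes, from broadest to narrowest.
FULL_ACCESS = "https://www.googleapis.com/auth/drive"
READONLY = "https://www.googleapis.com/auth/drive.readonly"
PER_FILE = "https://www.googleapis.com/auth/drive.file"

# Simplified privilege ordering (assumption: a strict linear ranking).
PRIVILEGE_RANK = {PER_FILE: 0, READONLY: 1, FULL_ACCESS: 2}

def is_over_privileged(requested: str, minimal_needed: str) -> bool:
    """Flag an app that requests a broader scope than it needs."""
    return PRIVILEGE_RANK[requested] > PRIVILEGE_RANK[minimal_needed]

# Hypothetical catalogue: (app name, scope requested, scope sufficient).
apps = [
    ("PDF Converter", FULL_ACCESS, PER_FILE),    # only touches opened files
    ("Drive Backup", FULL_ACCESS, FULL_ACCESS),  # genuinely needs everything
    ("Doc Viewer", READONLY, PER_FILE),
]

flagged = [name for name, req, need in apps if is_over_privileged(req, need)]
print(flagged)  # → ['PDF Converter', 'Doc Viewer']
```

In this toy catalogue two of three apps are flagged, mirroring the paper's finding that roughly two thirds of analyzed apps access more data than their functionality requires.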
Security and usability of a personalized user authentication paradigm: insights from a longitudinal study with three healthcare organizations
Funding information: This research has been partially supported by the EU Horizon 2020 Grant 826278 "Securing Medical Data in Smart Patient-Centric Healthcare Systems" (Serums) and the Research and Innovation Foundation (Project DiversePass: COMPLEMENTARY/0916/0182).
This paper proposes a user-adaptable and personalized authentication paradigm for healthcare organizations, which seamlessly reflects patients' episodic and autobiographical memories in graphical and textual passwords, aiming to improve the security strength of user-selected passwords and provide a positive user experience. We report on a longitudinal study, spanning three years, in which three public European healthcare organizations participated in order to design and evaluate the aforementioned paradigm. Three studies were conducted (n=169) with different stakeholders: i) a verification study aiming to identify existing authentication practices of the three healthcare organizations with diverse stakeholders (n=9); ii) a patient-centric feasibility study during which users interacted with the proposed authentication system (n=68); and iii) a human guessing attack study focusing on vulnerabilities among people sharing common experiences within location-aware images used for graphical passwords (n=92). Results revealed that the suggested paradigm scored high with regard to users' likeability, perceived security, usability and trust, but more importantly it assists the creation of more secure passwords. On the downside, the suggested paradigm introduces password-guessing vulnerabilities by individuals sharing common experiences with the end users. Findings are expected to scaffold the design of more patient-centric knowledge-based authentication mechanisms within today's dynamic computing realms.
Towards highly informative learning analytics
Among various trending topics that can be investigated in the field of educational technology, there is a clear and high demand for using artificial intelligence (AI) and educational data to improve the whole learning and teaching cycle. This spans from collecting and estimating the prior knowledge of learners for a certain subject to the actual learning process and its assessment. AI in education cuts across almost all educational technology disciplines and is key to many other technological innovations for educational institutions. The use of data to inform decision-making in education and training is not new, but the scope and scale of its potential impact on teaching and learning have silently increased by orders of magnitude over the last few years. The release of ChatGPT was another driver that finally made everyone aware of the potential effects of AI technology in the digital education system of today. We are now at a stage where data can be automatically harvested at previously unimagined levels of granularity and variety. Analysis of these data with AI has the potential to provide evidence-based insights into learners' abilities and patterns of behaviour that, in turn, can provide crucial action points to guide curriculum and course design, personalised assistance, assessment generation, and the development of new educational offerings. AI in education has many connected research communities, such as Artificial Intelligence in Education (AIED), Educational Data Mining (EDM), and Learning Analytics (LA). LA is the term used for research, studies, and applications that try to understand and support the behaviour of learners based on large sets of collected data.
Exploring the State of the Art in Legal QA Systems
Answering questions related to the legal domain is a complex task, primarily
due to the intricate nature and diverse range of legal document systems.
Providing an accurate answer to a legal query typically necessitates
specialized knowledge in the relevant domain, which makes this task all the
more challenging, even for human experts. Question answering (QA) systems are
designed to generate answers to questions asked in human languages. They use
natural language processing to understand questions and search through
information to find relevant answers. QA has various practical applications,
including customer service, education, research, and cross-lingual
communication. However, they face challenges such as improving natural language
understanding and handling complex and ambiguous questions. At this time,
there is a lack of surveys that discuss legal question
answering. To address this problem, we provide a comprehensive survey that
reviews 14 benchmark datasets for question answering in the legal field and
presents a review of the state-of-the-art Legal Question Answering deep
learning models. We cover the different architectures and techniques used in
these studies, as well as the performance and limitations of these models.
Moreover, we have established a public GitHub repository where we
regularly upload the most recent articles, open data, and source code. The
repository is available at:
\url{https://github.com/abdoelsayed2016/Legal-Question-Answering-Review}
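The retrieval step common to the QA pipelines surveyed above can be sketched minimally: score candidate legal passages by term overlap with the question and return the best match. Real systems replace this bag-of-words scorer with dense retrievers and deep reader models; the passages below are invented for illustration and are not from any benchmark in the survey.

```python
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercase bag-of-words representation of a text."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def score(question: str, passage: str) -> int:
    """Count terms shared between question and passage (multiset overlap)."""
    q, p = tokenize(question), tokenize(passage)
    return sum(min(q[t], p[t]) for t in q)

# Invented toy corpus standing in for a legal document collection.
passages = [
    "A contract requires offer, acceptance, and consideration.",
    "Trademark protection covers distinctive signs used in trade.",
    "The statute of limitations limits the time to file a claim.",
]

def answer(question: str) -> str:
    """Return the highest-scoring passage as the answer candidate."""
    return max(passages, key=lambda p: score(question, p))

print(answer("What elements does a valid contract require?"))
# → A contract requires offer, acceptance, and consideration.
```

Even this crude sketch illustrates the core challenge the survey raises: lexical overlap misses the specialized domain knowledge (e.g., that "require" and "requires" or legal synonyms should match) that legal QA demands.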
Mapping the Empirical Evidence of the GDPR (In-)Effectiveness: A Systematic Review
In the realm of data protection, a striking disconnect prevails between
traditional domains of doctrinal, legal, theoretical, and policy-based
inquiries and a burgeoning body of empirical evidence. Much of the scholarly
and regulatory discourse remains entrenched in abstract legal principles or
normative frameworks, leaving the empirical landscape uncharted or minimally
engaged. Since the birth of EU data protection law, a modest body of empirical
evidence has been generated but remains widely scattered and unexamined. Such
evidence offers vital insights into the perception, impact, clarity, and
effects of data protection measures but languishes on the periphery,
inadequately integrated into the broader conversation. To make a meaningful
connection, we conduct a comprehensive review and synthesis of empirical
research spanning nearly three decades (1995 to March 2022), advocating for a
more robust integration of empirical evidence into the evaluation and review of
the GDPR, while laying a methodological foundation for future empirical
research.
Landscape Study in Wireless and Mobile Learning in the post-16 sector
In the post-16 sector (further and higher education, and adult and community learning) there is a need to understand how wireless and mobile technologies can contribute to improving the student experience of learning, and help institutions fulfil their missions in an age of incomparably fast technological change. In the context of this interest and growing need, a Landscape Study project was commissioned by JISC through the Innovation strand of the JISC e-Learning Programme in 2004-5. Our project aims were to take a bird's-eye view of developments and practice in the UK and internationally, and to communicate our findings to a broad and varied audience. The Summary report is accompanied by 3 associated reports on 'Current Uses', 'Potential Uses' and 'Strategic Aspects'. (The four reports are available in one single document here.)
A qualitative study of stakeholders' perspectives on the social network service environment
Over two billion people are using the Internet at present, assisted by the mediating activities of software agents which deal with the diversity and complexity of information. There are, however, ethical issues due to the monitoring and surveillance, data mining, and autonomous nature of software agents. Considering this context, this study aims to comprehend stakeholders' perspectives on the social network service environment in order to identify the main considerations for the design of software agents in social network services in the near future. Twenty-one stakeholders, belonging to three key stakeholder groups, were recruited using a purposive sampling strategy for unstandardised semi-structured e-mail interviews. The interview data were analysed using a qualitative content analysis method. It was possible to identify three main considerations for the design of software agents in social network services, which were classified into the following categories: comprehensive understanding of users' perception of privacy, user-type recognition algorithms for software agent development, and enhancement of existing software agents.
Trust, Accountability, and Autonomy in Knowledge Graph-based AI for Self-determination
Knowledge Graphs (KGs) have emerged as fundamental platforms for powering
intelligent decision-making and a wide range of Artificial Intelligence (AI)
services across major corporations such as Google, Walmart, and Airbnb. KGs
complement Machine Learning (ML) algorithms by providing data context and
semantics, thereby enabling further inference and question-answering
capabilities. The integration of KGs with neural learning (e.g., Large
Language Models (LLMs)) is currently a topic of active research, commonly
known as neuro-symbolic AI. Despite the numerous benefits that can be accomplished with
KG-based AI, its growing ubiquity within online services may result in the loss
of self-determination for citizens as a fundamental societal issue. The more we
rely on these technologies, which are often centralised, the less citizens will
be able to determine their own destinies. To counter this threat, AI
regulation, such as the European Union (EU) AI Act, is being proposed in
certain regions. The regulation sets out what technologists need to do, raising
questions concerning: How can the output of AI systems be trusted? What is
needed to ensure that the data fuelling and the inner workings of these
artefacts are transparent? How can AI be made accountable for its
decision-making? This paper conceptualises the foundational topics and research
pillars to support KG-based AI for self-determination. Drawing upon this
conceptual framework, challenges and opportunities for citizen
self-determination are illustrated and analysed in a real-world scenario. As a
result, we propose a research agenda aimed at accomplishing the recommended
objectives.
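How a KG supplies the data context and semantics that enable further inference, as described above, can be sketched with a minimal triple store: facts are (subject, predicate, object) triples, and a simple rule (treating `located_in` as transitive) derives answers that are not stated explicitly. The entities, the relation name, and the query function are invented for illustration; production KGs use RDF stores and richer reasoners.

```python
# Tiny triple store: each fact is a (subject, predicate, object) triple.
triples = {
    ("Walmart Store #42", "located_in", "Austin"),
    ("Austin", "located_in", "Texas"),
    ("Texas", "located_in", "USA"),
}

def located_in(entity: str) -> set[str]:
    """Answer 'where is X?' by following located_in edges transitively."""
    found, frontier = set(), {entity}
    while frontier:
        nxt = {o for s, p, o in triples if s in frontier and p == "located_in"}
        frontier = nxt - found
        found |= nxt
    return found

print(sorted(located_in("Walmart Store #42")))
# → ['Austin', 'Texas', 'USA']  (Texas and USA are derived, not stated)
```

Because every derived answer traces back to explicit triples and an explicit rule, such inference is inspectable, which is one reason KGs are attractive for the trust and accountability questions the paper raises about opaque AI systems.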