
    Mobile Device Background Sensors: Authentication vs Privacy

    The increasing number of mobile devices in recent years has led to the collection of a large amount of personal information that needs to be protected. To this aim, behavioural biometrics has become very popular. But what is the discriminative power of mobile behavioural biometrics in real scenarios? With the success of Deep Learning (DL), architectures based on Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), such as Long Short-Term Memory (LSTM), have shown improvements compared to traditional machine learning methods. However, these DL architectures still have limitations that need to be addressed. In response, new DL architectures like Transformers have emerged. The question is, can these new Transformers outperform previous biometric approaches? To answer these questions, this thesis focuses on behavioural biometric authentication with data acquired from mobile background sensors (i.e., accelerometers and gyroscopes). In addition, to the best of our knowledge, this is the first thesis that explores and proposes novel behavioural biometric systems based on Transformers, achieving state-of-the-art results in gait, swipe, and keystroke biometrics. The adoption of biometrics requires a balance between security and privacy. Biometric modalities provide a unique and inherently personal approach to authentication. Nevertheless, biometrics also give rise to concerns regarding the invasion of personal privacy. According to the General Data Protection Regulation (GDPR) introduced by the European Union, personal data such as biometric data are sensitive and must be used and protected properly. This thesis analyses the impact of sensitive data on the performance of biometric systems and proposes a novel unsupervised privacy-preserving approach. The research conducted in this thesis makes significant contributions, including: i) a comprehensive review of the privacy vulnerabilities of mobile device sensors, covering metrics for quantifying privacy in relation to sensitive data, along with protection methods for safeguarding sensitive information; ii) an analysis of authentication systems for behavioural biometrics on mobile devices (i.e., gait, swipe, and keystroke), being the first thesis that explores the potential of Transformers for behavioural biometrics and introducing novel architectures that outperform the state of the art; and iii) a novel privacy-preserving approach for mobile biometric gait verification using unsupervised learning techniques, ensuring the protection of sensitive data during the verification process.
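
    As a purely illustrative sketch of the kind of model the thesis explores (not its actual architecture), a Transformer encoder over fixed-length accelerometer and gyroscope windows can produce embeddings that are then compared for verification. Every layer size, name, and the cosine-similarity decision rule below are assumptions made for the example (PyTorch):

        import torch
        import torch.nn as nn

        class IMUTransformer(nn.Module):
            # Illustrative sketch: Transformer encoder over 6-channel IMU windows
            # (3-axis accelerometer + 3-axis gyroscope); hyperparameters are assumptions.
            def __init__(self, n_channels=6, d_model=64, n_heads=4, n_layers=2, emb_dim=64):
                super().__init__()
                self.proj = nn.Linear(n_channels, d_model)             # per-timestep projection
                layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
                self.encoder = nn.TransformerEncoder(layer, n_layers)
                self.head = nn.Linear(d_model, emb_dim)                # embedding used for verification

            def forward(self, x):                                      # x: (batch, time, channels)
                h = self.encoder(self.proj(x))
                return self.head(h.mean(dim=1))                        # mean-pool over time

        # Verification as similarity between enrolment and probe embeddings
        model = IMUTransformer()
        enrol = model(torch.randn(1, 100, 6))                          # enrolment window
        probe = model(torch.randn(1, 100, 6))                          # probe window
        score = torch.cosine_similarity(enrol, probe)                  # accept if above a tuned threshold

    In practice such an encoder would typically be trained with a metric-learning objective (for example a triplet or contrastive loss) so that windows from the same user map to nearby embeddings.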

    Multidisciplinary perspectives on Artificial Intelligence and the law

    This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.

    AI: Limits and Prospects of Artificial Intelligence

    The emergence of artificial intelligence has triggered enthusiasm and the promise of boundless opportunities as much as uncertainty about its limits. The contributions to this volume explore the limits of AI, describe the necessary conditions for its functionality, reveal its attendant technical and social problems, and present some existing and potential solutions. At the same time, the contributors highlight the societal and attendant economic hopes and fears, utopias and dystopias, that are associated with the current and future development of artificial intelligence.

    Doing Research. Wissenschaftspraktiken zwischen Positionierung und Suchanfrage

    Research is increasingly conceived in terms of its results, not least because of the upheavals in the academic system. This volume, however, directs attention to the processes that make research results possible in the first place and give scholarship its contours. The title Doing Research is to be understood as a reference to the fact that research activity is shaped by specific positionings, partial perspectives, and searching movements. All contributors therefore engage reflexively with their own research practices. The starting point is abbreviations, the supposedly smallest units of scholarly negotiation and communication. Anchored in educational science, the social sciences, media studies, and art studies, the volume draws a multidimensional picture of contemporary research, with transdisciplinary points of connection between digitality and education. (DIPF/Orig.)

    Provincialising whiteness: Òyìnbó and the politics of race in Lagos, Nigeria

    Much academic work on racialisation processes to date has focused on a geographically restricted range of racial regimes characterised by white supremacy. This study broadens the geographical scope of analyses by looking at race-making practices in Lagos, Nigeria. I explore the geographical specificity of race-making in Lagos through interrogation of the concept of òyìnbó – a Yorùbá word most often translated into English as ‘white person.’ By highlighting the particular meanings attached to òyìnbó, and the political work that racialisation does in this understudied context, I argue for the need to provincialise understandings of whiteness in studies of global race-making processes. The project is based upon eleven months of ethnographic fieldwork with Lagosians of different generations and social demographics at three different research sites: a senior secondary school, the University of Lagos, and a church. My findings suggest that divergent meanings are attached to òyìnbós in these contexts, which do not universally celebrate whiteness. Rather, the practice of race-making in Lagos predominantly addresses local political concerns, and common attributes associated with òyìnbós are primarily evaluated according to local people’s own moral economy. This results in highly ambivalent attitudes to òyìnbós as individuals and to òyìnbó as trope. I suggest that these attitudes can best be explained by situating constructions of òyìnbós within their wider social context in Lagos. By centring local understandings in this way, I argue that the political practice of race-making in Lagos is not purely a reflection of a singular, global racial hierarchy, but a means of actively engaging with global and local power structures. I propose that seeking to understand the emic nature of divergent global race-making processes in this way has the potential to broaden academic understanding of these and related social phenomena.

    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics, and is available in open access. The contributions collected in this volume have either been published or presented in international conferences, seminars, workshops and journals since the dissemination of the fourth volume in 2015, or they are new. The contributions of each part of this volume are chronologically ordered. The first part of this book presents some theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignment in the fusion of sources of evidence, together with their Matlab codes. Because more applications of DSmT have emerged in the years since the publication of the fourth book in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification. Finally, the third part presents contributions related to belief functions in general that have been published or presented since 2015. These contributions relate to decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negators of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
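
    To give a concrete sense of the PCR5 rule mentioned above, the following minimal Python sketch combines two basic belief assignments over a tiny frame, redistributing each conflicting product back to its two contributing focal elements proportionally to their masses. The frame, the masses, and the function name are assumptions made for the example; real DSmT implementations handle many more cases:

        from itertools import product

        def pcr5_combine(m1, m2):
            # Minimal PCR5 sketch for two sources; focal elements are frozensets.
            combined = {}
            for (a, wa), (b, wb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:                                # conjunctive (non-conflicting) part
                    combined[inter] = combined.get(inter, 0.0) + wa * wb
                else:                                    # conflict: redistribute wa*wb to a and b
                    combined[a] = combined.get(a, 0.0) + wa**2 * wb / (wa + wb)
                    combined[b] = combined.get(b, 0.0) + wb**2 * wa / (wa + wb)
            return combined

        # Two sources over the frame {A, B}; the combined masses still sum to 1
        A, B = frozenset("A"), frozenset("B")
        print(pcr5_combine({A: 0.6, B: 0.4}, {A: 0.3, B: 0.7}))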

    Improving diagnostic procedures for epilepsy through automated recording and analysis of patients’ history

    Transient loss of consciousness (TLOC) is a time-limited state of profound cognitive impairment characterised by amnesia, abnormal motor control, loss of responsiveness, a short duration and complete recovery. Most instances of TLOC are caused by one of three health conditions: epilepsy, functional (dissociative) seizures (FDS), or syncope. There is often a delay before the correct diagnosis is made, and 10-20% of individuals initially receive an incorrect diagnosis. Clinical decision tools based on the endorsement of TLOC symptom lists have been limited to distinguishing between two causes of TLOC. The Initial Paroxysmal Event Profile (iPEP) has shown promise but was demonstrated to have greater accuracy in distinguishing between syncope and epilepsy or FDS than between epilepsy and FDS. The objective of this thesis was to investigate whether interactional, linguistic, and communicative differences in how people with epilepsy and people with FDS describe their experiences of TLOC can improve the predictive performance of the iPEP. An online web application was designed that collected information about TLOC symptoms and medical history from patients and witnesses using a binary questionnaire and verbal interaction with a virtual agent (VA). We explored potential methods of automatically detecting these communicative differences, whether the differences were present during an interaction with a VA, to what extent these automatically detectable communicative differences improve the performance of the iPEP, and the acceptability of the application from the perspective of patients and witnesses. Two feature sets applied to previous doctor-patient interactions, one designed to measure formulation effort and the other to detect semantic differences between the two groups, predicted the diagnosis with accuracies of 71% and 81%, respectively. Individuals with epilepsy or FDS provided descriptions of TLOC to the VA that were qualitatively similar to those observed in previous research. Both feature sets were effective predictors of the diagnosis when applied to the web application recordings (85.7% and 85.7%). Overall, the accuracy of machine learning models trained for the three-way classification between epilepsy, FDS, and syncope using the iPEP responses collected from patients through the web application was worse than the performance observed in previous research (65.8% vs 78.3%), but the performance was increased by the inclusion of features extracted from the spoken descriptions of TLOC (85.5%). Finally, most participants who provided feedback reported that the online application was acceptable. These findings suggest that it is feasible to differentiate between people with epilepsy and people with FDS using an automated analysis of spoken seizure descriptions. Furthermore, incorporating these features into a clinical decision tool for TLOC can improve its predictive performance by improving the differential diagnosis between these two health conditions. Future research should use the feedback to improve the design of the application and increase the perceived acceptability of the approach.
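
    Purely as an illustration of the general approach (the thesis's actual feature sets measure formulation effort and semantic differences, which are not reproduced here), a classifier combining binary questionnaire items with features derived from transcribed descriptions could be assembled along these lines with scikit-learn; all column names and example texts are hypothetical:

        import pandas as pd
        from sklearn.compose import ColumnTransformer
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import Pipeline

        # Hypothetical data: binary questionnaire items plus transcribed spoken descriptions
        data = pd.DataFrame({
            "q_item_1": [1, 0, 0, 1],
            "q_item_2": [0, 1, 1, 0],
            "description": [
                "it came on suddenly and I bit my tongue",
                "I just drifted away, it is hard to describe",
                "everything went distant and blurry",
                "I fell and shook, then woke up confused",
            ],
        })
        labels = ["epilepsy", "FDS", "FDS", "epilepsy"]

        features = ColumnTransformer([
            ("text", TfidfVectorizer(), "description"),          # simple text-derived features
            ("items", "passthrough", ["q_item_1", "q_item_2"]),  # questionnaire responses
        ])
        clf = Pipeline([("features", features), ("model", LogisticRegression())])
        clf.fit(data, labels)
        print(clf.predict(data.iloc[:1]))                        # predicted diagnosis for one respondent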

    Towards a Digital Capability Maturity Framework for Tertiary Institutions

    Background: The Digital Capability (DC) of an institution is the extent to which the institution's culture, policies, and infrastructure enable and support digital practices (Killen et al., 2017), and maturity is the continuous improvement of those capabilities. As technology continues to evolve, it is likely to give rise to constant changes in teaching and learning, potentially disrupting Tertiary Education Institutions (TEIs) and making existing organisational models less effective. An institution's ability to adapt to continuously changing technology depends on changes in culture and on leadership decisions within individual institutions. Change without structure leads to inefficiencies, which are evident across the Nigerian TEI landscape and can be attributed mainly to a lack of clarity and agreement on a development structure. Objectives: This research aims to design a structure with a pathway to maturity, to support the continuous improvement of DC in TEIs in Nigeria and consequently improve the success of digital education programmes. Methods: I started by conducting a Systematic Literature Review (SLR) investigating the body of knowledge on DC, its composition, the relationships between its elements and their respective impact on the maturity of TEIs. Findings from the review led me to investigate further the key roles instrumental in developing Digital Capability Maturity in Tertiary Institutions (DCMiTI). The results of these investigations formed the initial ideas and constructs upon which the proposed structure was built. I then explored a combination of quantitative and qualitative methods to substantiate the initial constructs and gain a deeper understanding of the relationships between elements and sub-elements. Next, I used triangulation to extend the validity of the findings by replicating the methods in a case study of TEIs in Nigeria. Finally, after using the validated constructs and knowledge base to propose a structure based on CMMI concepts, I conducted an expert panel workshop to test the model's validity. Results: I consolidated the body of knowledge from the SLR into a universal classification of 10 elements, each comprising sub-elements, and went on to propose a classification for DCMiTI. The elements and sub-elements in the classification indicate the success factors for digital maturity, which were also found to positively impact the ability to design, deploy and sustain digital education. These findings were confirmed in a UK university and triangulated in a case study of Northwest Nigeria. The case study confirmed the literature findings on the status of DCMiTI in Nigeria and provided sufficient evidence to suggest that a maturity structure would be a well-suited solution to supporting DCM in the region. I thus scoped, designed, and populated a domain-specific framework for DCMiTI, configured to support the educational landscape in Northwest Nigeria. Conclusion: The proposed DCMiTI framework enables TEIs to assess their maturity level across the various capability elements and reports on DCM as a whole. It provides guidance on the criteria that must be satisfied to achieve higher levels of digital maturity. The framework received expert validation: domain experts agreed that it is applicable to developing DCMiTI and would be a valuable tool to support TEIs in delivering successful digital education. Recommendations were made to engage in further iterations of testing by deploying the proposed framework in TEIs to confirm the extent of its generalisability and acceptability.

    Security and Privacy for Modern Wireless Communication Systems

    This reprint focuses on the latest protocol research, software/hardware development and implementation, and system architecture design addressing emerging security and privacy issues in modern wireless communication networks. Relevant topics include, but are not limited to, the following: deep-learning-based security and privacy design; covert communications; information-theoretical foundations for advanced security and privacy techniques; lightweight cryptography for power-constrained networks; physical-layer key generation; prototypes and testbeds for security and privacy solutions; encryption and decryption algorithms for low-latency constrained networks; security protocols for modern wireless communication networks; network intrusion detection; physical-layer design with security considerations; anonymity in data transmission; vulnerabilities in security and privacy in modern wireless communication networks; challenges of security and privacy in node–edge–cloud computation; security and privacy design for low-power wide-area IoT networks; security and privacy design for vehicle networks; and security and privacy design for underwater communication networks.