    Protecting Privacy in Indian Schools: Regulating AI-based Technologies' Design, Development and Deployment

    Education is one of the priority areas for the Indian government, where Artificial Intelligence (AI) technologies are touted to bring digital transformation. Several Indian states have started deploying facial recognition-enabled CCTV cameras, emotion recognition technologies, fingerprint scanners, and radio-frequency identification (RFID) tags in their schools to provide personalised recommendations, ensure student security, and predict student drop-out rates, while also building a 360-degree profile of each student. Further, integrating Aadhaar (a digital identity card based on biometric data) across AI technologies and learning management systems (LMS) renders schools a 'panopticon'. Certain technologies or systems, such as Aadhaar, CCTV cameras, GPS systems, RFID tags, and learning management systems, are used primarily for continuous data collection, storage, and retention. Though they cannot be termed AI technologies per se, they are fundamental to designing and developing AI systems such as facial, fingerprint, and emotion recognition technologies. The large volume of student data collected rapidly through the former technologies is used to build algorithms for the latter AI systems. Once these algorithms are trained using machine learning (ML) techniques, they learn correlations across multiple datasets, predicting each student's identity, decisions, grades, learning growth, tendency to drop out, and other behavioural characteristics. Such autonomous and repetitive collection, processing, storage, and retention of student data without effective data protection legislation endangers student privacy. The algorithmic predictions made by AI technologies are an avatar of the data fed into the system. An AI technology is only as good as the people collecting the data, processing it into relevant and valuable output, and regularly evaluating the inputs fed into the model; an AI model can produce inaccurate predictions if relevant data are overlooked. However, the belief of the state, school administrations, and parents in AI technologies as a panacea for student security and educational development overlooks the context in which 'data practices' are conducted. A right to privacy in an AI age is inextricably connected to the data practices through which data gets 'cooked'. Thus, data protection legislation that operates without understanding and regulating such data practices will remain ineffective in safeguarding privacy. The thesis undertakes interdisciplinary research to better understand the interplay between the data practices of AI technologies and the social practices of an Indian school, an interplay that the present Indian data protection legislation overlooks, endangering students' privacy from the design and development stages of an AI model through to its deployment. The thesis recommends that the Indian legislature frame legislation better equipped for the AI/ML age and guides the Indian judiciary on evaluating the legality and reasonableness of designing, developing, and deploying such technologies in schools.

    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics, and is available in open access. The collected contributions of this volume have either been published or presented in international conferences, seminars, workshops and journals since the dissemination of the fourth volume in 2015, or they are new. The contributions of each part of this volume are chronologically ordered. The first part of this book presents some theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of the (quasi-)vacuous belief assignment in the fusion of sources of evidence, together with their Matlab codes. Because more applications of DSmT have emerged in the years since the appearance of the fourth book in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification. Finally, the third part presents interesting contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, the negator of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
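
    To illustrate the flavour of the PCR combination rules discussed in the first part, the following minimal Python sketch implements the classical two-source PCR5 rule over a small frame of discernment; the frame, the mass values, and the function name are illustrative assumptions, and the code is not taken from the book.

    from itertools import product

    def pcr5(m1, m2):
        """Combine two basic belief assignments (dicts mapping frozenset -> mass) with PCR5."""
        combined = {}
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            prod = wa * wb
            if inter:
                # Non-empty intersection: usual conjunctive consensus.
                combined[inter] = combined.get(inter, 0.0) + prod
            elif wa + wb > 0:
                # Conflicting pair: redistribute the partial conflict back to a and b
                # proportionally to the masses involved (the PCR5 principle).
                combined[a] = combined.get(a, 0.0) + wa * prod / (wa + wb)
                combined[b] = combined.get(b, 0.0) + wb * prod / (wa + wb)
        return combined

    # Illustrative example on the frame {A, B}.
    A, B, AB = frozenset("A"), frozenset("B"), frozenset("AB")
    print(pcr5({A: 0.6, AB: 0.4}, {B: 0.7, AB: 0.3}))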

    A data taxonomy for adaptive multifactor authentication in the internet of health care things

    The health care industry has faced various challenges over the past decade as we move toward a digital future where services and data are available on demand. The systems of interconnected devices, users, data, and working environments are referred to as the Internet of Health Care Things (IoHT). IoHT devices have emerged in the past decade as cost-effective solutions with large scalability capabilities to address the constraints of limited resources. These devices cater to the need for remote health care services outside of physical interactions. However, IoHT security is often overlooked because the devices are quickly deployed and configured as solutions to meet the demands of a heavily saturated industry. During the COVID-19 pandemic, studies have shown that cybercriminals are exploiting the health care industry, and data breaches are targeting user credentials through authentication vulnerabilities. Poor password use and management and the lack of a multifactor authentication security posture within the IoHT cause losses of millions of dollars, according to IBM reports. Therefore, it is important that health care authentication security moves toward adaptive multifactor authentication (AMFA) to replace traditional approaches to authentication. We identified a lack of taxonomy for data models that particularly focus on the IoHT data architecture to improve the feasibility of AMFA. This viewpoint focuses on identifying key cybersecurity challenges within a theoretical framework for a data model that summarizes the main components of IoHT data. The data are to be used in modalities that are suited to health care users in modern IoHT environments and in response to the COVID-19 pandemic. To establish the data taxonomy, a review of recent IoHT papers was conducted to discuss related work on IoHT data management and its use in next-generation authentication systems. Reports, journal articles, conference papers, and white papers were reviewed for IoHT authentication data technologies in relation to the problem statement of remote authentication and user management systems. Only publications written in English from the last decade (2012-2022) were included, to identify key issues within current health care practices and their management of IoHT devices. We discuss the components of the IoHT architecture from the perspective of data management and sensitivity to ensure privacy for all users. The data model addresses the security requirements of IoHT users, environments, and devices toward the automation of AMFA in health care. We found that, in health care authentication, the significant threats were related to data breaches owing to weak security options and poor user configuration of IoHT devices. The security requirements of the IoHT data architecture are discussed, along with impactful cybersecurity methods for health care devices and data and their respective attacks. The data taxonomy provides a better understanding of, and solutions and improvements for, user authentication and its security features in remote working environments.
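
    As a rough illustration of how such a data taxonomy could feed an adaptive decision, the sketch below groups hypothetical IoHT data into user, device, and environment components and derives the number of authentication factors to request; all class names, fields, and thresholds are assumptions chosen for illustration and do not reproduce the data model described in the paper.

    from dataclasses import dataclass

    @dataclass
    class UserContext:            # user-related data (role, recent behaviour)
        role: str                 # e.g. "clinician" or "patient"
        failed_logins: int

    @dataclass
    class DeviceContext:          # possession-related data about the IoHT device
        device_id: str
        is_managed: bool          # enrolled in the organisation's device management

    @dataclass
    class EnvironmentContext:     # contextual data about the session
        network: str              # e.g. "hospital-lan" or "public-wifi"
        hour: int                 # local hour of access, 0-23

    def required_factors(user: UserContext, device: DeviceContext, env: EnvironmentContext) -> int:
        """Toy risk-based policy: the riskier the context, the more factors are requested."""
        risk = 0
        risk += 2 if env.network == "public-wifi" else 0
        risk += 1 if not device.is_managed else 0
        risk += 1 if user.failed_logins > 3 else 0
        risk += 1 if env.hour < 6 or env.hour > 22 else 0
        return 1 if risk == 0 else 2 if risk <= 2 else 3

    print(required_factors(UserContext("clinician", 0),
                           DeviceContext("ecg-monitor-42", True),
                           EnvironmentContext("hospital-lan", 10)))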

    Seamless Multimodal Biometrics for Continuous Personalised Wellbeing Monitoring

    Artificially intelligent perception is increasingly present in the lives of every one of us. Vehicles are no exception, (...) In the near future, pattern recognition will have an even stronger role in vehicles, as self-driving cars will require automated ways to understand what is happening around (and within) them and act accordingly. (...) This doctoral work focused on advancing in-vehicle sensing through the research of novel computer vision and pattern recognition methodologies for both biometrics and wellbeing monitoring. The main focus has been on electrocardiogram (ECG) biometrics, a trait well known for its potential for seamless driver monitoring. Major efforts were devoted to achieving improved performance in identification and identity verification in off-the-person scenarios, well known for increased noise and variability. Here, end-to-end deep learning ECG biometric solutions were proposed and important topics were addressed, such as cross-database and long-term performance, waveform relevance through explainability, and interlead conversion. Face biometrics, a natural complement to the ECG in seamless unconstrained scenarios, was also studied in this work. The open challenges of masked face recognition and interpretability in biometrics were tackled in an effort to evolve towards algorithms that are more transparent, trustworthy, and robust to significant occlusions. Within the topic of wellbeing monitoring, improved solutions to multimodal emotion recognition in groups of people and activity/violence recognition in in-vehicle scenarios were proposed. Finally, we also proposed a novel way to learn template security within end-to-end models, dismissing additional separate encryption processes, and a self-supervised learning approach tailored to sequential data, in order to ensure data security and optimal performance. (...) Comment: Doctoral thesis presented and approved on the 21st of December 2022 at the University of Porto.
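
    For readers unfamiliar with what an end-to-end ECG biometric model can look like, here is a minimal PyTorch sketch of a 1D convolutional network that maps a raw single-lead ECG segment directly to identity logits; the architecture, layer sizes, and segment length are assumptions chosen for brevity and are not the models proposed in the thesis.

    import torch
    import torch.nn as nn

    class ECGIdentifier(nn.Module):
        """Toy end-to-end ECG identification network: raw segment in, identity logits out."""
        def __init__(self, num_subjects: int):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
                nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
                nn.AdaptiveAvgPool1d(1),       # global average pooling over time
            )
            self.classifier = nn.Linear(32, num_subjects)

        def forward(self, x):                  # x: (batch, 1, segment_length)
            return self.classifier(self.features(x).squeeze(-1))

    model = ECGIdentifier(num_subjects=20)
    segments = torch.randn(8, 1, 1000)         # batch of 8 raw ECG segments
    logits = model(segments)                   # shape (8, 20): one score per enrolled subject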

    Cybersecurity: Past, Present and Future

    The digital transformation has created a new digital space known as cyberspace. This new cyberspace has improved the workings of businesses, organizations, governments, society as a whole, and the day-to-day life of individuals. With these improvements come new challenges, and one of the main challenges is security. The security of this new cyberspace is called cybersecurity. Cyberspace has created new technologies and environments such as cloud computing, smart devices, the IoT, and several others. To keep pace with these advancements in cyber technologies, there is a need to expand research and develop new cybersecurity methods and tools to secure these domains and environments. This book is an effort to introduce the reader to the field of cybersecurity, highlight current issues and challenges, and provide future directions to mitigate or resolve them. The main specializations of cybersecurity covered in this book are software security, hardware security, the evolution of malware, biometrics, cyber intelligence, and cyber forensics. We must learn from the past, evolve our present, and improve the future. Based on this objective, the book covers the past, present, and future of these main specializations of cybersecurity. The book also examines upcoming areas of research in cyber intelligence, such as hybrid augmented and explainable artificial intelligence (AI). Human and AI collaboration can significantly increase the performance of a cybersecurity system. Interpreting and explaining machine learning models, i.e., explainable AI, is an emerging field of study and has great potential to improve the role of AI in cybersecurity. Comment: Author's copy of the book published under ISBN: 978-620-4-74421-

    Intelligent interface agents for biometric applications

    This thesis investigates the benefits of applying the intelligent agent paradigm to biometric identity verification systems. Multimodal biometric systems, despite their additional complexity, hold the promise of providing a higher degree of accuracy and robustness. Multimodal biometric systems are examined in this work, leading to the design and implementation of a novel distributed multimodal identity verification system based on an intelligent agent framework. User interface design issues are also important in the domain of biometric systems and present an exceptional opportunity for employing adaptive interface agents. Through the use of such interface agents, system performance may be improved, leading to an increase in recognition rates over a non-adaptive system while producing a more robust and agreeable user experience. The investigation of such adaptive systems has been a focus of the work reported in this thesis. The research presented in this thesis is divided into two main parts. First, the design, development and testing of a novel distributed multimodal authentication system employing intelligent agents is presented. The second part details the design and implementation of an adaptive interface layer based on interface agent technology and demonstrates its integration with a commercial fingerprint recognition system. The performance of these systems is then evaluated using databases of biometric samples gathered during the research. The results obtained from the experimental evaluation of the multimodal system demonstrated a clear improvement in the accuracy of the system compared to a unimodal biometric approach. The adoption of the intelligent agent architecture at the interface level resulted in a system where false reject rates were reduced when compared to a system that did not employ an intelligent interface. The results obtained from both systems clearly demonstrate the benefits of combining an intelligent agent framework with a biometric system to provide a more robust and flexible application.
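
    As a simple illustration of why combining modalities can reduce errors, the sketch below fuses the normalised match scores of two independent matchers (for example, face and fingerprint) with a weighted sum before a single accept/reject decision; the weights, threshold, and scores are hypothetical, and this is not the agent-based architecture described in the thesis.

    def fuse_scores(face_score: float, finger_score: float,
                    w_face: float = 0.5, w_finger: float = 0.5) -> float:
        """Weighted-sum score-level fusion; scores are assumed normalised to [0, 1]."""
        return w_face * face_score + w_finger * finger_score

    def verify(face_score: float, finger_score: float, threshold: float = 0.6) -> bool:
        """Accept the claimed identity only if the fused score clears the threshold."""
        return fuse_scores(face_score, finger_score) >= threshold

    # A borderline face match (0.55) alone would be rejected at a 0.6 threshold,
    # but a strong fingerprint match (0.80) lifts the fused score to 0.675.
    print(verify(0.55, 0.80))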

    A lightweight and secure multilayer authentication scheme for wireless body area networks in healthcare system

    Wireless body area networks (WBANs) have lately been combined with various healthcare equipment to monitor patients' health status and communicate information with their healthcare practitioners. Since healthcare data often contain personal and sensitive information, it is important that healthcare systems have a secure way for users to log in and access resources and services. The lack of security and of anonymity in WBAN communication can cause their operational failure. Other schemes exist in this area, but they are vulnerable to offline identity-guessing attacks, impersonation attacks on sensor nodes, and spoofing attacks on the hub node. Therefore, this study provides a secure approach that overcomes these issues while maintaining comparable efficiency on wireless sensor nodes and mobile phones. To conduct the proof of security, the proposed scheme uses the Scyther tool for formal analysis and the Canetti–Krawczyk (CK) model for informal analysis. Furthermore, the suggested technique outperforms the existing symmetric and asymmetric encryption-based schemes.

    A Novel Authentication Method That Combines Honeytokens and Google Authenticator

    Despite the rapid development of technology, computer systems still rely heavily on passwords for security, which can be problematic. Although multi-factor authentication has been introduced, it is not completely effective against more advanced attacks. To address this, this study proposes a new two-factor authentication method that uses honeytokens. Honeytokens and Google Authenticator are combined to create a stronger authentication process. The proposed approach aims to provide additional layers of security and protection to computer systems, increasing their overall security beyond what is currently provided by single-password or standard two-factor authentication methods. The key difference is that the proposed system resembles two-factor authentication but, in reality, works like a multi-factor authentication system. Multi-factor authentication (MFA) is a security technique that verifies a user's identity by requiring multiple credentials from distinct categories. These typically include knowledge factors (something the user knows, such as a password or PIN), possession factors (something the user has, such as a mobile phone or security token), and inherence factors (something the user is, such as a biometric characteristic like a fingerprint). This multi-tiered approach significantly enhances protection against potential attacks. We examined and evaluated our system's robustness against various types of attacks. From the user's side, the system is as user-friendly as a two-factor authentication method with an authenticator, while being more secure.
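
    To make the combination concrete, here is a minimal sketch that pairs a TOTP check (the kind of code produced by Google Authenticator) with a honeytoken trap that flags stolen credentials; it relies on the third-party pyotp library, and the account names, secrets, and flow are hypothetical, not the system proposed in the paper.

    import pyotp  # third-party library: pip install pyotp

    # Decoy accounts planted in the credential store; any use of them signals a breach.
    HONEYTOKEN_USERS = {"svc_backup_admin", "j.doe_finance"}

    # Real users and their TOTP secrets (these would normally live in protected storage).
    USER_SECRETS = {"alice": pyotp.random_base32()}

    def authenticate(username: str, password_ok: bool, otp: str) -> str:
        """Toy login flow: honeytoken trap first, then the password result, then TOTP."""
        if username in HONEYTOKEN_USERS:
            # Never a legitimate login: raise an alert instead of granting access.
            return "ALERT: honeytoken used - credential store likely compromised"
        if not password_ok or username not in USER_SECRETS:
            return "denied"
        totp = pyotp.TOTP(USER_SECRETS[username])
        return "granted" if totp.verify(otp) else "denied"

    current_code = pyotp.TOTP(USER_SECRETS["alice"]).now()
    print(authenticate("alice", True, current_code))         # granted
    print(authenticate("svc_backup_admin", True, "123456"))  # honeytoken alert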