The Datafication of Open Banking: A critical interrogation into the data privacy issues and cybersecurity risk implications of cross-border data flows under Canada’s proposed Open Banking Framework
Banks have always served as the chief custodians of financial data, and in this role they regulate the activities between customers, technology, and merchants. Worldwide consumer demand has added pressure on financial institutions to adopt more streamlined methods of accessing financial data. This comes at a time when the financial services industry sits on the verge of pending reforms through digitisation and cross-border transactions. Open banking is one such change, predicted to shake up the traditional banking model and expected to bring a plethora of benefits to both customers and the financial industry. Open banking provides third-party providers (TPPs) with access to consumer banking, transaction, and other financial information via application programming interfaces (APIs). It could expand to include user consent-based movement of information for investments, insurance, telecommunications, utilities, and more. This ability to share financial data through APIs could promote faster, easier, and more secure payments, particularly cross-border transactions. This research covers three major challenges with open banking. The first is that open banking introduces a consumer data portability feature at a time when no such right exists under current law. The second is that open banking is a consent-based system that will require a higher standard of consent from a privacy law perspective, especially in relation to cross-border transactions. The third is that open banking exacerbates existing cybersecurity risks while creating new ones, which may require additional protections through either the financial or privacy law regimes. It is also useful to explore the separate regulatory limits each country imposes on what personal data can be transferred or stored in its market, and whether these structures can ultimately be made interoperable for cross-border transactions.
Open banking also raises the concern that it may become a dangerous route for criminals to trick naïve consumers into disclosing secret information, allowing illegal access to their personal data. As such, there is no room for error in rolling out open banking as a model, as its failure could result in harsh economic impacts across the financial sector.
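As a purely hypothetical illustration of the consent-based access model described above (the TPP identifier and scope names below are invented for this sketch and do not come from any actual open banking standard), a TPP's data request might be validated against the scopes a consumer has explicitly granted:

```python
from dataclasses import dataclass, field

# Hypothetical consent record: the scopes a consumer has granted one TPP.
@dataclass
class Consent:
    tpp_id: str
    granted_scopes: set = field(default_factory=set)
    revoked: bool = False

def authorize(consent: Consent, tpp_id: str, requested_scope: str) -> bool:
    """Allow a data request only if the consent is active, belongs to the
    requesting TPP, and explicitly covers the requested scope."""
    return (not consent.revoked
            and consent.tpp_id == tpp_id
            and requested_scope in consent.granted_scopes)

consent = Consent(tpp_id="tpp-001",
                  granted_scopes={"accounts:read", "transactions:read"})
print(authorize(consent, "tpp-001", "transactions:read"))  # True: granted
print(authorize(consent, "tpp-001", "payments:write"))     # False: never granted
```

The sketch makes the abstract's point concrete: under a consent-based system, any scope not explicitly and currently granted must be refused by default, and revocation must immediately cut off access.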
Multidisciplinary perspectives on Artificial Intelligence and the law
This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.
Protecting Privacy in Indian Schools: Regulating AI-based Technologies' Design, Development and Deployment
Education is one of the priority areas for the Indian government, where Artificial Intelligence (AI) technologies are touted to bring digital transformation. Several Indian states have also started deploying facial recognition-enabled CCTV cameras, emotion recognition technologies, fingerprint scanners, and radio-frequency identification (RFID) tags in their schools not only to provide personalised recommendations, ensure student security, and predict students’ drop-out rates, but also to provide 360-degree information on each student. Further, integrating Aadhaar (a digital identity card that works on biometric data) across AI technologies and learning management systems (LMS) renders schools a ‘panopticon’.
Certain technologies or systems, like Aadhaar, CCTV cameras, GPS systems, RFID tags, and learning management systems, are used primarily for continuous data collection, storage, and retention. Though they cannot be termed AI technologies per se, they are fundamental to designing and developing AI systems like facial, fingerprint, and emotion recognition technologies. The large volumes of student data collected rapidly through the former technologies are used to create algorithms for the latter AI systems. Once algorithms are processed using machine learning (ML) techniques, they learn correlations between multiple datasets, predicting each student’s identity, decisions, grades, learning growth, tendency to drop out, and other behavioural characteristics. Such autonomous and repetitive collection, processing, storage, and retention of student data without effective data protection legislation endangers student privacy.
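A minimal sketch of how ML techniques can learn such correlations from collected student records and turn them into predictions. The feature names, the toy data, and the use of a plain k-nearest-neighbours vote are all invented for illustration; real deployments use far richer features and models:

```python
import math

# Invented training records: (attendance_rate, average_grade) -> dropped_out?
training = [
    ((0.95, 82.0), False),
    ((0.90, 75.0), False),
    ((0.55, 48.0), True),
    ((0.40, 51.0), True),
    ((0.88, 68.0), False),
    ((0.50, 55.0), True),
]

def predict_dropout(features, k=3):
    """Predict drop-out risk by majority vote among the k most similar
    students in the training data (plain k-nearest-neighbours)."""
    neighbours = sorted(training, key=lambda rec: math.dist(rec[0], features))[:k]
    votes = sum(1 for _, dropped in neighbours if dropped)
    return votes > k // 2

print(predict_dropout((0.45, 50.0)))  # True: resembles students who dropped out
print(predict_dropout((0.92, 80.0)))  # False: resembles students who stayed
```

Even this toy model shows why the abstract's concern matters: the prediction is entirely an artefact of which records were collected and labelled, so biased or incomplete data practices flow straight into the output.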
The algorithmic predictions by AI technologies are an avatar of the data fed into the system. An AI technology is only as good as the person collecting the data, processing it for a relevant and valuable output, and regularly evaluating the inputs going into the AI model. An AI model can produce inaccurate predictions if the person overlooks any relevant data. However, the state’s, school administrations’, and parents’ belief in AI technologies as a panacea for student security and educational development overlooks the context in which ‘data practices’ are conducted. A right to privacy in an AI age is inextricably connected to the data practices through which data gets ‘cooked’. Thus, data protection legislation that operates without understanding and regulating such data practices will remain ineffective in safeguarding privacy.
The thesis undertakes interdisciplinary research that enables a better understanding of the interplay between the data practices of AI technologies and the social practices of an Indian school, an interplay that the present Indian data protection legislation overlooks, endangering students’ privacy from the design and development stages through to the deployment of an AI model. The thesis recommends that the Indian legislature frame better legislation equipped for the AI/ML age, and advises the Indian judiciary on evaluating the legality and reasonableness of designing, developing, and deploying such technologies in schools.
LIPIcs, Volume 251, ITCS 2023, Complete Volume
Cognitive Machine Individualism in a Symbiotic Cybersecurity Policy Framework for the Preservation of Internet of Things Integrity: A Quantitative Study
This quantitative study examined the complex nature of modern cyber threats to propose the establishment of cyber as an interdisciplinary field of public policy, initiated through the creation of a symbiotic cybersecurity policy framework. For the public good (and to maintain ideological balance), there must be recognition that public policies are at a transition point where the digital public square is a tangible reality that is more than a collection of technological widgets. The academic contribution of this research project is the fusion of humanistic principles with Internet of Things (IoT) technologies that alters our perception of the machine from an instrument of human engineering into a thinking peer, elevating cyber from technical esoterism into an interdisciplinary field of public policy. The contribution to the US national cybersecurity policy body of knowledge is a unified policy framework (manifested in the symbiotic cybersecurity policy triad) that could transform cybersecurity policies from network-based to entity-based. A correlational archival data design was used, with the frequency of malicious software attacks as the dependent variable and the diversity of intrusion techniques as the independent variable for RQ1. For RQ2, the frequency of detection events was the dependent variable and the diversity of intrusion techniques was the independent variable. Self-Determination Theory serves as the theoretical framework, as the cognitive machine can recognize, self-endorse, and maintain its own identity based on a sense of self-motivation that is progressively shaped by the machine’s ability to learn. The transformation of cyber policies from technical esoterism into an interdisciplinary field of public policy starts with the recognition that the cognitive machine is an independent consumer of, advisor into, and influenced by public policy theories, philosophical constructs, and societal initiatives.
The Perception of K-12 Instrumental Directors in Low-Income Areas on Virtual Learning with Skill Development and Retention
Due to the extreme measures taken to protect students from COVID-19 during the pandemic, schools closed their doors, and educators struggled to continue teaching through virtual learning platforms. Performance-based classrooms were encouraged to discover new methods and strategies to motivate students to thrive even though face-to-face rehearsals were restricted. This study examined the experiences secondary music education instrumentalists faced while attempting to utilize synchronous and asynchronous instruction in a 100 percent virtual performance-based environment. The study aimed to understand the negative and positive effects on secondary instrumentalists’ performance abilities, fundamental development, and participation/retention since the introduction of virtual learning in low-income areas. It also examined the possible benefits of enhancing pedagogical skills through the addition of technological advances to push instrumental instruction and performance at the secondary level. The study followed a qualitative hermeneutic phenomenological design. Music educators in low-income DeKalb County communities were interviewed and asked to share their perspectives on and experiences of performance-based virtual learning and its results. The study highlighted the need for future discussions to create and implement state and national virtual music education guidelines that would assist music educators in turning a devastating situation into a blessing for all arts programs and their stakeholders.
Current Challenges in the Application of Algorithms in Multi-institutional Clinical Settings
The Coronavirus disease pandemic has highlighted the importance of artificial intelligence in multi-institutional clinical settings. Particularly in situations where the healthcare system is overloaded and large volumes of data are generated, artificial intelligence has great potential to provide automated solutions and to unlock the untapped potential of acquired data. This includes the areas of care, logistics, and diagnosis. For example, automated decision support applications could tremendously help physicians in their daily clinical routine. Especially in radiology and oncology, the exponential growth of imaging data, triggered by a rising number of patients, leads to a permanent overload of the healthcare system, making the use of artificial intelligence inevitable. However, the efficient and advantageous application of artificial intelligence in multi-institutional clinical settings faces several challenges, such as accountability and regulation hurdles, implementation challenges, and fairness considerations. This work focuses on the implementation challenges, which include the following questions: How can well-curated and standardized data be ensured? How do algorithms from other domains perform on multi-institutional medical datasets? And how can more robust and generalizable models be trained? Questions of how to interpret results, and whether correlations exist between the performance of the models and the characteristics of the underlying data, are also part of the work. Therefore, besides presenting a technical solution for manual data annotation and tagging of medical images, a real-world federated learning implementation for image segmentation is introduced. Experiments on a multi-institutional prostate magnetic resonance imaging dataset showcase that models trained by federated learning can achieve performance similar to training on pooled data.
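A minimal sketch of the aggregation step at the heart of such a federated learning setup (the institutions, weights, and dataset sizes below are invented; real implementations average the full parameter tensors of a segmentation network, this is only the federated-averaging idea in miniature):

```python
def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: average each model parameter across
    clients, weighted by local dataset size. Only model updates are
    shared; patient data never leaves the institution."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Three hypothetical institutions with local model weights and dataset sizes.
weights = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [100, 100, 200]
print(federated_average(weights, sizes))  # [3.5, 4.5]
```

The size-weighted average lets institutions with more local data contribute proportionally more to the global model, which is one reason federated training can approach the performance of training on pooled data.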
Furthermore, Natural Language Processing algorithms with the tasks of semantic textual similarity, text classification, and text summarization are applied to multi-institutional, structured and free-text, oncology reports. The results show that performance gains are achieved by customizing state-of-the-art algorithms to the peculiarities of the medical datasets, such as the occurrence of medications, numbers, or dates. In addition, performance influences are observed depending on characteristics of the data, such as lexical complexity. The generated results, human baselines, and retrospective human evaluations demonstrate that artificial intelligence algorithms have great potential for use in clinical settings. However, due to the difficulty of processing domain-specific data, there still exists a performance gap between the algorithms and the medical experts. In the future, it is therefore essential to improve the interoperability and standardization of data, as well as to continue working on algorithms that perform well on medical, possibly domain-shifted, data from multiple clinical centers.
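A minimal sketch of the semantic textual similarity task on report text, using a bag-of-words cosine baseline (the oncology-style snippets are invented; the state-of-the-art systems the abstract refers to use learned text representations rather than raw word counts):

```python
import math
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Bag-of-words cosine similarity: near 1.0 for texts with similar
    word distributions, 0.0 for texts sharing no words."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Invented report snippets for illustration.
s1 = "prostate lesion in left peripheral zone"
s2 = "lesion in the left peripheral zone of the prostate"
s3 = "no evidence of metastatic disease"
print(round(cosine_similarity(s1, s2), 2))  # high: heavy word overlap
print(round(cosine_similarity(s1, s3), 2))  # 0.0: no shared words
```

A baseline like this also hints at why domain customization pays off: medications, numbers, and dates that mean the same thing can surface as entirely different tokens, so surface-overlap scores understate the true clinical similarity.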