Southern Adventist University Undergraduate Catalog 2023-2024
Southern Adventist University's undergraduate catalog for the academic year 2023-2024.
Multidisciplinary perspectives on Artificial Intelligence and the law
This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence ("AI") and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics, and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.
Protecting Privacy in Indian Schools: Regulating AI-based Technologies' Design, Development and Deployment
Education is one of the priority areas for the Indian government, where Artificial Intelligence (AI) technologies are touted to bring digital transformation. Several Indian states have started deploying facial recognition-enabled CCTV cameras, emotion recognition technologies, fingerprint scanners, and radio-frequency identification (RFID) tags in their schools to provide personalised recommendations, ensure student security, and predict student drop-out rates, but these systems also generate 360-degree profiles of each student. Further, integrating Aadhaar (a digital identity card based on biometric data) with AI technologies and learning management systems (LMS) renders schools a "panopticon".
Certain technologies or systems, such as Aadhaar, CCTV cameras, GPS systems, RFID tags, and learning management systems, are used primarily for continuous data collection, storage, and retention. Though they cannot be termed AI technologies per se, they are fundamental to designing and developing AI systems such as facial, fingerprint, and emotion recognition technologies. The large volumes of student data rapidly collected through the former technologies are used to train algorithms for the latter AI systems. Once those algorithms are trained using machine learning (ML) techniques, they learn correlations across multiple datasets, predicting each student's identity, decisions, grades, learning growth, tendency to drop out, and other behavioural characteristics. Such autonomous and repetitive collection, processing, storage, and retention of student data without effective data protection legislation endangers student privacy.
The algorithmic predictions made by AI technologies are an avatar of the data fed into the system. An AI technology is only as good as the person collecting the data, processing it into relevant and valuable output, and regularly evaluating the inputs that go into the AI model. An AI model can produce inaccurate predictions if that person overlooks relevant data. However, the belief of the state, school administrations, and parents that AI technologies are a panacea for student security and educational development overlooks the context in which "data practices" are conducted. The right to privacy in an AI age is inextricably connected to the data practices where data gets "cooked". Thus, data protection legislation that operates without understanding and regulating such data practices will remain ineffective in safeguarding privacy.
The thesis undertakes interdisciplinary research to better understand the interplay between the data practices of AI technologies and the social practices of an Indian school, which the present Indian data protection legislation overlooks, endangering students' privacy from the design and development stages through to the deployment of an AI model. The thesis recommends that the Indian legislature frame legislation better equipped for the AI/ML age, and offers the Indian judiciary guidance on evaluating the legality and reasonableness of designing, developing, and deploying such technologies in schools.
Second-Person Surveillance: Politics of User Implication in Digital Documentaries
This dissertation analyzes digital documentaries that utilize second-person address and roleplay to make users feel implicated in contemporary refugee crises, mass incarceration in the U.S., and state and corporate surveillances. Digital documentaries are seemingly more interactive and participatory than linear film and video documentary as they are comprised of a variety of auditory, visual, and written media, utilize networked technologies, and turn the documentary audience into a documentary user. I draw on scholarship from documentary, game, new media, and surveillance studies to analyze how second-person address in digital documentaries is configured through user positioning and direct address within the works themselves, in how organizations and creators frame their productions, and in how users and players respond in reviews, discussion forums, and Let's Plays. I build on Michael Rothberg's theorization of the implicated subject to explore how these digital documentaries bring the user into complicated relationality with national and international crises. Visually and experientially implying that users bear responsibility to the subjects and subject matter, these works can, on the one hand, replicate modes of liberal empathy for suffering, distant "others" and, on the other, simulate one's own surveillant modes of observation or behavior to mirror it back to users and open up one's offline thoughts and actions as a site of critique.
This dissertation charts how second-person address shapes and limits the political potentialities of documentary projects and connects them to a lineage of direct address from educational and propaganda films, museum exhibits, and serious games. By centralizing the user's individual experience, the interventions that second-person digital documentaries can make into social discourse change from public, institution-based education to more privatized forms of sentimental education geared toward personal edification and self-realization. I argue that unless tied to larger initiatives or movements, digital documentaries reaffirm a neoliberal politics of individual self-regulation and governance instead of public education or collective, social intervention.
Chapter one focuses on 360-degree virtual reality (VR) documentaries that utilize the feeling of presence to position users as if among refugees and as witnesses to refugee experiences in camps outside of Europe and various dwellings in European cities. My analysis of Clouds Over Sidra (Gabo Arora and Chris Milk 2015) and The Displaced (Imraan Ismail and Ben C. Solomon 2015) shows how these VR documentaries utilize observational realism to make believable and immersive their representations of already empathetic refugees. The empathetic refugee is often young, vulnerable, depoliticized and dehistoricized and is a well-known trope in other forms of humanitarian media that continues into VR documentaries. Forced to Flee (Zahra Rasool 2017), I am Rohingya (Zahra Rasool 2017), So Leben Flüchtlinge in Berlin (Berliner Morgenpost 2017), and Limbo: A Virtual Experience of Waiting for Asylum (Shehani Fernando 2017) disrupt easy immersions into realistic-looking VR experiences of stereotyped representations and user identifications and, instead, can reflect back the user's political inaction and surveillant modes of looking.
Chapter two analyzes web- and social media messenger-based documentaries that position users as outsiders to U.S. mass incarceration. Users are noir-style co-investigators into the crime of the prison-industrial complex in Fremont County, Colorado in Prison Valley: The Prison Industry (David Dufresne and Philippe Brault 2009) and co-riders on a bus transporting prison inmates' loved ones for visitations to correctional facilities in Upstate New York in A Temporary Contact (Nirit Peled and Sara Kolster 2017). Both projects construct an experience of carceral constraint for users to reinscribe seeming "outside" places, people, and experiences as within the continuation of the racialized and classed politics of state control through mass incarceration. These projects utilize interfaces that create a tension between replicating an exploitative hierarchy between non-incarcerated users and those subject to mass incarceration while also de-immersing users in these experiences to mirror back the user's supposed distance from this mode of state regulation.
Chapter three investigates a type of digital game I term dataveillance simulation games, which position users as surveillance agents in ambiguously dystopian nation-states and force users to use their own critical thinking and judgment to construct the criminality of state-sanctioned surveillance targets. Project Perfect Citizen (Bad Cop Studios 2016), Orwell: Keeping an Eye on You (Osmotic Studios 2016), and Papers, Please (Lucas Pope 2013) all create a dual empathy: players empathize with bureaucratic surveillance agents while empathizing with surveillance targets whose emails, text messages, documents, and social media profiles reveal them to be "normal" people. I argue that while these games show criminality to be a construct, they also utilize a racialized fear of the loss of one's individual privacy to make players feel like they too could be surveillance targets.
Chapter four examines personalized digital documentaries that turn users and their data into the subject matter. Do Not Track (Brett Gaylor 2015), A Week with Wanda (Joe Derry Hall 2019), Stealing Ur Feelings (Noah Levenson 2019), Alfred Premium (Joël Ronez, Pierre Corbinais, and Émilie F. Grenier 2019), How They Watch You (Nick Briz 2021), and Fairly Intelligent™ (A.M. Darke 2021) track, monitor, and confront users with their own online behavior to reflect back a corporate surveillance that collects, analyzes, and exploits user data for profit. These digital documentaries utilize emotional fear- and humor-based appeals to persuade users that these technologies are controlling them, shaping their desires and needs, and dehumanizing them through algorithmic surveillance.
The Right to Vote Securely
American elections currently run on outdated and vulnerable technology. Computer science researchers have shown that voting machines and other election equipment used in many jurisdictions are plagued by serious security flaws, or even shipped with basic safeguards disabled. Making matters worse, it is unclear whether current law requires election authorities or companies to fix even the most egregious vulnerabilities in their systems, and whether voters have any recourse if they do not.
This Article argues that election law can, does, and should ensure that the right to vote is a right to vote securely. First, it argues that constitutional voting rights doctrines already prohibit election practices that fail to meet a bare minimum threshold of security. But the bare minimum is not enough to protect modern election infrastructure against sophisticated threats. This Article thus proposes new statutory measures to bolster election security beyond the constitutional baseline, with technical provisions designed to change the course of insecure election practices that have become regrettably commonplace, and to standardize best practices drawn from state-of-the-art research on election security.
Machine learning and mixed reality for smart aviation: applications and challenges
The aviation industry is a dynamic and ever-evolving sector. As technology advances and becomes more sophisticated, the aviation industry must keep up with the changing trends. While some airlines have invested in machine learning and mixed reality technologies, the vast majority of regional airlines continue to rely on inefficient strategies and lack digital applications. This paper investigates the state-of-the-art applications that integrate machine learning and mixed reality into the aviation industry. Smart aerospace engineering design, manufacturing, testing, and services are being explored to increase operator productivity. Autonomous systems, self-service systems, and data visualization systems are being researched to enhance passenger experience. This paper also investigates the safety, environmental, technological, cost, security, capacity, and regulatory challenges of smart aviation, as well as potential solutions to ensure future quality, reliability, and efficiency.
Futures of Data Ownership: Defining Data Policies in Canadian Context
The importance of data is increasing along with its proliferation in today's world, where data has become a primary source of innovation, knowledge, insight, and competitive and financial advantage in the race for information procurement. This interest in acquiring and exploiting data, together with current concerns regarding the privacy and security of information, raises the question of who should own data and how policies can preserve data ownership. There is a growing awareness that companies benefit disproportionately from collecting and selling personal information, driving the desire for greater individual control of personal data. As technology progresses exponentially, there is a dire need to regulate tech organizations.
With the increasing use of personal data by tech companies, data privacy and ownership concerns have become more significant in today's society. Although governments worldwide have introduced privacy regulations to protect citizens' data, there is still a need for policies and legislation that safeguard citizens' rights, allow consumers to control their data, and implement strict measures in case of data breaches or violation of data rights.
The research project "Futures of Data Ownership - Informing Data Policies in Canadian Context" aims to explore emerging technological shifts and promote ethical use and data protection by developing data policies that consider the Canadian context. The research will employ primary and secondary research methods, including horizon scanning, semi-structured interviews, and a literature review, to inform policy and strategy development. In conclusion, the research project informs potential policies and legislation that regulate tech organizations and protect data ownership, ensuring a secure and trustworthy digital future for all.
Cybersecurity: Past, Present and Future
The digital transformation has created a new digital space known as
cyberspace. This new cyberspace has improved the workings of businesses,
organizations, governments, society as a whole, and day to day life of an
individual. With these improvements come new challenges, and one of the main
challenges is security. The security of the new cyberspace is called
cybersecurity. Cyberspace has created new technologies and environments such as
cloud computing, smart devices, IoTs, and several others. To keep pace with
these advancements in cyber technologies there is a need to expand research and
develop new cybersecurity methods and tools to secure these domains and
environments. This book is an effort to introduce the reader to the field of
cybersecurity, highlight current issues and challenges, and provide future
directions to mitigate or resolve them. The main specializations of
cybersecurity covered in this book are software security, hardware security,
the evolution of malware, biometrics, cyber intelligence, and cyber forensics.
We must learn from the past, evolve our present and improve the future. Based
on this objective, the book covers the past, present, and future of these main
specializations of cybersecurity. The book also examines the upcoming areas of
research in cyber intelligence, such as hybrid augmented and explainable
artificial intelligence (AI). Human and AI collaboration can significantly
increase the performance of a cybersecurity system. Interpreting and explaining
machine learning models, i.e., explainable AI is an emerging field of study and
has a lot of potentials to improve the role of AI in cybersecurity.Comment: Author's copy of the book published under ISBN: 978-620-4-74421-