880 research outputs found

    Legal issues in automated vehicles: critically considering the potential role of consent and interactive digital interfaces

    Some of the first ‘automated’ vehicles to be deployed on our roads will require a system of shared driving with a human driver. While this creates technical and operational challenges, the law must also facilitate such a transfer. One method may be to obtain the driver’s consent to share operational responsibility and to delineate legal responsibility between vehicle and driver in the event of an accident. Consent is a voluntary agreement in which an individual is aware of the potential consequences of consenting, including the risks. The driver of a partially automated vehicle must therefore be informed of potential risks before giving consent to share operational responsibility. This paper examines the inherent dangers associated with shared operational responsibility, in particular where the driver is requested to take back control from the automated vehicle during the journey. Drivers are likely to experience delay in regaining situational awareness, making such operational transfers hazardous. It is argued that where an interactive digital interface is used to convey information such as driver responsibility, risk and legal terms, drivers may fail to sufficiently process such communications due to fundamental weaknesses in human–machine interaction. The use of an interactive digital interface alone may therefore be inadequate to communicate information to drivers effectively. If the problems identified are not addressed, driver consent may be inconsequential and fail to facilitate a predictable demarcation of legal responsibility between automated vehicles and drivers. Ongoing research into automated vehicle driver training is considered as part of the preparation required to design driver education to a level at which drivers can sufficiently understand the responsibilities involved in operating a partially automated vehicle, with implications for future driver training, licensing and certification.

    Developing a Measure of Social, Ethical, and Legal Content for Intelligent Cognitive Assistants

    We address the issue of consumer privacy against the backdrop of the national priority of maintaining global leadership in artificial intelligence, the ongoing research in Artificial Cognitive Assistants, and the explosive growth in the development and application of Voice Activated Personal Assistants (VAPAs) such as Alexa and Siri, spurred on by the needs and opportunities arising out of the COVID-19 global pandemic. We first review the growth and associated legal issues of VAPAs in private homes, banks, healthcare, and education. We then summarize the policy guidelines for the development of VAPAs and classify these into five major categories with associated traits. We follow by developing a relative importance weight for each of the traits and categories, and suggest the establishment of a rating system related to the legal, ethical, functional, and social content policy guidelines established by these organizations. Finally, we suggest the establishment of an agency that would use the proposed rating system to inform customers of the implications of adopting a particular VAPA in their sphere.
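The weighted rating scheme the abstract describes can be sketched as a simple weighted average. The category names, weights, and scores below are illustrative placeholders only, not values taken from the paper.

```python
# Minimal sketch of a weighted rating for a VAPA, assuming hypothetical
# category weights and 0-10 category scores (not the paper's actual data).
def vapa_rating(scores, weights):
    """Aggregate per-category scores into one rating via a weighted average."""
    total_weight = sum(weights.values())
    return sum(scores[cat] * w for cat, w in weights.items()) / total_weight

# Hypothetical relative-importance weights for the four guideline categories.
weights = {"legal": 0.30, "ethical": 0.30, "functional": 0.25, "social": 0.15}
scores = {"legal": 7, "ethical": 6, "functional": 9, "social": 8}

print(round(vapa_rating(scores, weights), 2))  # → 7.35
```

A consumer-facing agency could publish such a single score per assistant, while retaining the per-category breakdown for transparency.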

    Reframing space for ubiquitous computing: a study of a national park

    Since the late 1980s, researchers have been working on a “post-desktop” agenda for human-computer interaction known as ubiquitous computing. Visions for ubiquitous computing have been based around notions of embeddedness and invisibility: where mobile, networked and context-aware technologies are incorporated into the environments and objects of our everyday lives, and where the infrastructures required to operate them remain largely invisible. As this vision becomes partially realised, the focus of ubiquitous computing research has begun to shift towards considering the broader social and cultural aspects and implications of these developments. In addition to conceiving of their technologies as embedded and embeddable within built environments and objects, researchers are therefore beginning to recognise that they are equally embedded within social and cultural practices, interactions and productions. Particularly, as technologies find themselves in diverse environmental and social contexts, researchers are being asked to critically assess the role and potential their technologies have in both defining and shaping the spaces of our everyday lives, and the ways in which we understand them. This research provides one such critical account of ubiquitous computing, approached through the frame (and reframing) of space. Whereas human-computer interaction has long sought to learn from and mimic physical interactions with the world, where spatial metaphors and conventions have been exploited in the design and implementation of interactive systems, critical accounts of the ways in which technologies reside in and help create spaces remain relatively underexplored. As such, this research examines the relationship between ubiquitous technologies, the spaces of our everyday lives and the understandings we have of them. It does so through a cross-disciplinary engagement with cultural geography and the ethnographic practices of sociology and anthropology.
It reframes the notion of space inherent in ubiquitous technologies away from one that equates it to a Cartesian representation of the world, or a source of metaphors, towards one that positions it as a social and cultural production. Building on this foundation, two multi-sited ethnographic studies with a state government organisation, Parks Victoria, are presented that demonstrate various productions of space in practice. Based on analysis of these studies, a series of design inspirations are presented that reframe space as emergent and seasonal processes. Drawing on these design inspirations, two design concepts are presented that are envisioned for use within Parks Victoria: Habitat, a location-based platform for tacit knowledge, and Wayfarer, a visualisation and narrative tool for situated understandings. A reflection on these related pieces of research will then serve to highlight new, practical directions for further work in ubiquitous computing that incorporates perspectives from the social sciences, and moves beyond the typical divides between ‘work’ and ‘non-work’, ‘urban’ and ‘rural’ contexts.

    Tweet for behavior change: Using social media for the dissemination of public health messages

    Background: Social media public health campaigns have the advantage of tailored messaging at low cost and large reach, but little is known about what would determine their feasibility as tools for inducing attitude and behavior change. Objective: The aim of this study was to test the feasibility of designing, implementing, and evaluating a social media–enabled intervention for skin cancer prevention. Methods: A quasi-experimental feasibility study used social media (Twitter) to disseminate different message “frames” related to care in the sun and cancer prevention. Phase 1 utilized the Northern Ireland cancer charity’s Twitter platform (May 1 to July 14, 2015). Following a 2-week “washout” period, Phase 2 commenced (August 1 to September 30, 2015) using a bespoke Twitter platform. Phase 2 also included a Thunderclap, whereby users allowed their social media accounts to automatically post a bespoke message on their behalf. Message frames were categorized into 5 broad categories: humor, shock or disgust, informative, personal stories, and opportunistic. Seed users with a notable following were contacted to be “influencers” in retweeting campaign content. A pre- and postintervention Web-based survey recorded skin cancer prevention knowledge and attitudes in Northern Ireland (population 1.8 million). Results: There were a total of 417,678 tweet impressions, 11,213 engagements, and 1211 retweets related to our campaign. Shocking messages generated the greatest impressions (shock, n=2369; informative, n=2258; humorous, n=1458; story, n=1680), whereas humorous messages generated greater engagement (humorous, n=148; shock, n=147; story, n=117; informative, n=100) and greater engagement rates compared with story tweets. Informative messages resulted in the greatest number of shares (informative, n=17; humorous, n=10; shock, n=9; story, n=7).
The study findings included improved knowledge of skin cancer severity in a pre- and postintervention Web-based survey, with greater awareness that skin cancer is the most common form of cancer (preintervention: 28.4% [95/335] vs postintervention: 39.3% [168/428] answered “True”) and that melanoma is the most serious (49.1% [165/336] vs 55.5% [238/429]). The results also show improved attitudes toward ultraviolet (UV) exposure and skin cancer, with a reduction in agreement that respondents “like to tan” (60.5% [202/334] vs 55.6% [238/428]). Conclusions: Social media–disseminated public health messages reached more than 23% of the Northern Ireland population. A Web-based survey suggested that the campaign might have contributed to improved knowledge and attitudes toward skin cancer among the target population. Findings suggested that shocking and humorous messages generated the greatest impressions and engagement, but information-based messages were the most likely to be shared. The extent of behavioral change as a result of the campaign remains to be explored; however, the change in attitudes and knowledge is promising. Social media is an inexpensive, effective method for delivering public health messages. However, existing and traditional process evaluation methods may not be suitable for social media.
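The per-frame engagement-rate comparison reported above (engagements divided by impressions) can be reproduced directly from the counts given in the abstract:

```python
# Engagement rate per message frame, using the per-frame impression and
# engagement counts reported in the abstract (rate = engagements / impressions).
impressions = {"shock": 2369, "informative": 2258, "humorous": 1458, "story": 1680}
engagements = {"shock": 147, "informative": 100, "humorous": 148, "story": 117}

rates = {frame: engagements[frame] / impressions[frame] for frame in impressions}
for frame, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{frame}: {rate:.1%}")
# humorous tops the ranking (~10.2%), consistent with the abstract's claim
# that humorous messages had greater engagement rates than story tweets.
```

This makes the distinction in the findings concrete: shock led on raw impressions, but humor converted impressions into engagement at the highest rate.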

    Potential of One-to-One Technology Uses and Pedagogical Practices: Student Agency and Participation in an Economically Disadvantaged Eighth Grade

    The accelerated growth of 1:1 educational computing initiatives has challenged digital equity with a three-tiered, socioeconomic digital divide: (a) access, (b) higher order uses, and (c) user empowerment and personalization. As the access gap has been closing, the exponential increase of 1:1 devices threatens to widen the second and third digital divides. Using critical theory, specifically critical theory of technology and critical pedagogy, and a qualitative case study design, this research explored the experiences of a middle school categorized under California criteria as “socioeconomically disadvantaged.” This study contributes to critical theory on technology within an educational setting, and gives voice to the experiences of teachers and students with economic disadvantages experiencing the phenomenon of 1:1 computing. Using observational, interview, and school document data, this study asked the question: To what extent do 1:1 technology integration uses and associated pedagogical practices foster Margins of Maneuver in an eighth grade comprised of a student population that is predominantly economically disadvantaged? Probing two key markers of Margins of Maneuver, student agency and participation, the study found: (a) a technology-enhanced learning culture; (b) a teacher shift to facilitator roles; (c) instances of engaged, experiential, and inquiry learning and higher order technology uses; and (d) in-progress efforts to strengthen student voice and self-identity. Accompanying the progress in narrowing economically based digital divides, the data also demonstrated some tension with the knowledge economy. Nevertheless, sufficient margins existed, associated with one-to-one uses and practices, to result in micro-resistances characterized by assertion of student agency and democratization potential.

    The tyranny of perceived opinion: Freedom and information in the era of big data

    Never before have we had access to as much information as we do today, but how do we avail ourselves of it? In parallel with the increase in the amount of information, we have created means of curating and delivering it in sophisticated ways, through the technologies of algorithms, Big Data and artificial intelligence. I examine how information is curated, and how digital technology has led to the creation of filter bubbles, while simultaneously creating closed online spaces in which people of similar opinions can congregate – echo chambers. These phenomena partly stem from our tendency towards selective exposure – a tendency to seek information that supports pre-existing beliefs, and to avoid unpleasant information. This becomes a problem when the information and the suggestions we receive, and the way we are portrayed, create expectations and thus become leading. When the technologies I discuss are employed as they are today, combined with human nature, they pose a threat to liberty by undermining individuality, autonomy and the very foundation of liberal society. Liberty is an important part of our image of the good society, and this article is an attempt to analyse one way in which applications of technology can be detrimental to our society. While Alexis de Tocqueville feared the tyranny of the majority, we would do well to fear the tyranny of the algorithms and perceived opinion.

    Analyzing the effects of context-aware mobile design principles on student performance in undergraduate kinesiology courses

    Learning occurs when content is accessed in a recursive process of awareness, exploration, reflection and resolution within one’s social context. With the rapid adoption of mobile technologies, mobile learning (m-Learning) researchers should incorporate aspects of mobile human-computer interaction research into the instructional design process. Specifically, the most visible current definitions of and research in m-Learning provide overviews of the learning theory informing mobility and focus on device characteristics, but do not focus on how people interact with mobile devices in their everyday lives. The purpose of this convergent study was to determine what effect the incorporation of research on mobile user context has on student learning. Six mobile design principles were extracted from the literature and applied to mobile apps. Using a true experimental design, the study randomly assigned 60 participants to treatment and control conditions. Participants in the treatment group received a series of apps designed according to the mobile design principles. The control group received a placebo app that mimicked content from the learning management system for their course. The results of the analysis of covariance procedure indicated that the treatment group scored a significantly higher mean score than the control group. Further analysis of event-tracking data indicated a statistically significant correlation between content access events and posttest scores. Students in the treatment group used their apps for less time, but had more content access events and subsequently higher posttest scores. The data suggest that m-Learning is something more than just an extension of what already exists. It is not just a luggable form of Web-based learning, nor merely a deep understanding of pedagogy or the delivery of course material to a mobile device. It requires the designer to understand instructional and software design, mobile human-computer usage patterns, and learning theory.

    Privacy For Whom? A Multi-Stakeholder Exploration of Privacy Designs

    Privacy is considered one of the fundamental human rights. Researchers have been investigating privacy issues in various domains, such as physical privacy, data privacy, privacy as a legal right, and privacy designs. In the Human-Computer Interaction field, privacy researchers have been focusing on understanding people’s privacy concerns when they interact with computing systems, designing and building privacy-enhancing technologies to help people mitigate these concerns, and investigating how people’s privacy perceptions and privacy designs influence people’s behaviors. Existing privacy research has focused overwhelmingly on the privacy needs of end-users, i.e., people who use a system or a product, such as Internet users and smartphone users. However, as our computing systems become more and more complex, privacy issues within these systems have started to impact not only the end-users but also other stakeholders, and privacy-enhancing mechanisms designed for the end-users can also affect multiple stakeholders beyond the users. In this dissertation, I examine how different stakeholders perceive privacy-related issues and expect privacy designs to function across three application domains: online behavioral advertising, drones, and smart homes. I choose these three domains because they represent different multi-stakeholder environments of varying complexity. In particular, these environments present opportunities to study technology-mediated interpersonal relationships, i.e., the relationships between primary users (owners, end-users) and secondary users (bystanders), and to investigate how these relationships influence people’s privacy perceptions and their desired ways of privacy protection.
Through a combination of qualitative, quantitative, and design methods, including interviews, surveys, participatory designs, and speculative designs, I present how multi-stakeholder considerations change our understandings of privacy and influence privacy designs. I draw design implications from the study results and guide future privacy designs to consider the needs of different stakeholders, e.g., cooperative mechanisms that aim to enhance the communication between primary and secondary users. In addition, this methodological approach allows researchers to directly and proactively engage with multiple stakeholders and explore their privacy perceptions and expected privacy designs. This differs from the approaches commonly used in the privacy literature and, as such, points to a methodological contribution. Finally, this dissertation shows that when applying the theory of Contextual Integrity in a multi-stakeholder environment, there are hidden contextual factors that may alter the contextual informational norms. I present three examples from the study results and argue that it is necessary to carefully examine such factors in order to clearly identify the contextual norms. I propose a research agenda to explore best practices for applying the theory of Contextual Integrity in a multi-stakeholder environment.

    Understanding spatial media

    Over the past decade a new set of spatial and locative technologies has been rolled out, including online, interactive mapping tools with accompanying application programming interfaces (APIs), interactive virtual globes, user-generated spatial databases and mapping systems, locative media, urban dashboards and citizen reporting geo-systems, and geodesign and architectural and planning tools. In addition, social media produces spatial (meta)data that can be analysed geographically. These technologies, their practices, and the effects they engender have been referred to in a number of ways, including the geoweb, neogeography, volunteered geographic information (VGI), and locative media, which collectively constitute spatial media. This chapter untangles and defines these terms before setting out the transformative effects of spatial media with respect to some fundamental geographic and social concepts: spatial data/information; mapping; space and spatiality; mobility, spatial practices and spatial imaginaries; and knowledge politics. We conclude by setting out some questions for further consideration.

    Protecting Privacy in Indian Schools: Regulating AI-based Technologies' Design, Development and Deployment

    Education is one of the priority areas for the Indian government, where Artificial Intelligence (AI) technologies are touted to bring digital transformation. Several Indian states have also started deploying facial recognition-enabled CCTV cameras, emotion recognition technologies, fingerprint scanners, and radio-frequency identification (RFID) tags in their schools to provide personalised recommendations, ensure student security, and predict the drop-out rate of students, but also to provide 360-degree information on a student. Further, integrating Aadhaar (a digital identity card that works on biometric data) across AI technologies and learning management systems (LMS) renders schools a ‘panopticon’. Certain technologies or systems, like Aadhaar, CCTV cameras, GPS systems, RFID tags, and learning management systems, are used primarily for continuous data collection, storage, and retention. Though they cannot be termed AI technologies per se, they are fundamental to designing and developing AI systems like facial, fingerprint, and emotion recognition technologies. The large amount of student data collected speedily through the former technologies is used to create algorithms for the latter AI systems. Once algorithms are processed using machine learning (ML) techniques, they learn correlations between multiple datasets, predicting each student’s identity, decisions, grades, learning growth, tendency to drop out, and other behavioural characteristics. Such autonomous and repetitive collection, processing, storage, and retention of student data without effective data protection legislation endangers student privacy. The algorithmic predictions of AI technologies are an avatar of the data fed into the system. An AI technology is only as good as the person collecting the data, processing it into a relevant and valuable output, and regularly evaluating the inputs to the AI model.
An AI model can produce inaccurate predictions if the person overlooks any relevant data. However, the belief of the state, school administrations and parents in AI technologies as a panacea for student security and educational development overlooks the context in which ‘data practices’ are conducted. A right to privacy in an AI age is inextricably connected to data practices where data gets ‘cooked’. Thus, data protection legislation that operates without understanding and regulating such data practices will remain ineffective in safeguarding privacy. The thesis undertakes interdisciplinary research that enables a better understanding of the interplay between the data practices of AI technologies and the social practices of an Indian school, which the present Indian data protection legislation overlooks, endangering students’ privacy from the design and development stages of an AI model through to deployment. The thesis recommends that the Indian legislature frame legislation better equipped for the AI/ML age, and that the Indian judiciary evaluate the legality and reasonability of designing, developing, and deploying such technologies in schools.