6 research outputs found

    Individual Online Routines: External Guardianship, Personal Guardianship, and the Influence of Breaches

    Computer crime increases in frequency and cost each year. Of all computer crimes, data breaches are the costliest to organizations. Beyond the harm they cause to organizations, data breaches often expose individuals' personal data, placing the affected individuals at greater risk of computer crimes such as credit card fraud, tax fraud, and identity theft. Despite the breadth and severity of the consequences for individuals, the existing IS literature offers little coverage of how users respond to data breaches. Routine Activity Theory provides the study's theoretical frame: it holds that crime occurs when the routine activities of a potential target place them in proximity to a motivated offender in the absence of a capable guardian. This work examines the target-guardian dyad in detail. Using semi-structured interviews, we inquire into potential antecedents of users' beliefs about external guardians, how those beliefs shape users' online routines, and how this process changes in the aftermath of a data breach. The study employs a qualitative case study design to explore, at the individual level, the process by which users outside organizations determine their online routines, in light of their reliance for data protection on external guardians over which they have little to no control, and how awareness of a data breach affects that process. The cases selected are (1) the 2017 data breach at the consumer credit agency Equifax and (2) the Facebook-Cambridge Analytica data compromise that became public in 2018. Our findings show that users' individual, situational, and data characteristics affect their external guardianship beliefs and online routines. Additionally, under certain circumstances, users can fail to identify data guardians or develop adversarial feelings towards organizations that act as data guardians through control of user data. With some well-defined limitations, users report changes after data breaches in individual characteristics, perceptions of situational and data characteristics, and online routines. Based on these findings, we draw conclusions for future research and practice.

    Resignation or Resistance? Examining the Digital Privacy Attitudes and Behaviours of East Yorkers

    Digital technologies have become enmeshed in everyday life, exposing the public to privacy risks through data collection and aggregation practices. The upsurge in the use of social networking platforms has also created opportunities for privacy violations through institutional and social surveillance. Employing a qualitative thematic analysis, this study explores how adults (N=101) living in East York, Toronto, navigate privacy through their use of the internet and digital services. Participants expressed feelings of mistrust, loss of control, resignation, and perceived self-unimportance with regard to their digital data. Importantly, others described their desire and attempts to gain agency when using online services. This study supports the rich and developing body of literature on the sociology of resignation; as such, it challenges the notion that digital users are unconcerned about their data online and argues for a re-evaluation of the informed and empowered actor metaphor at the heart of the privacy paradox debate.

    Individual values of GenZ in managing their Internet Privacy: a decision analytic assessment

    Online privacy is a growing concern. As individuals and businesses connect, the problem of privacy remains significant. In this thesis, we address three primary questions: What are the individual values of GenZ concerning online privacy? What are the fundamental objectives of GenZ in protecting their online privacy? What means objectives does GenZ consider for protecting their online privacy? We argue that GenZ's online privacy is vital to protect, and that protection can be ensured if we understand which privacy-related values GenZ hold and define their objectives accordingly. Our research places individual values at the centre of any discussion of privacy concerns. Values have an essential place in scientific discourse; the concept of values is one of the very few discussed and employed across several social science disciplines. To that effect, in this research we present value-based objectives for GenZ internet privacy, classified into two categories: the fundamental objectives and the means to achieve them. In a final synthesis, our six fundamental objectives guide the management of GenZ internet privacy concerns: increase trust in online interactions; maximize the responsibility of data custodians; maximize the right to be left alone; maximize individual ability to manage privacy controls; maximize awareness of platform functionality; and ensure that personal data does not change. Collectively, the fundamental and means objectives are a valuable basis for GenZ to evaluate their privacy posture. The objectives are also helpful for social media companies and related platforms in designing privacy policies aligned with what GenZ wants. Finally, the objectives are a helpful policy aid for developing laws and regulations.

    Jumping into the Cloud: Privacy, Security and Trust of Cloud-Based Computing Within K-12 American Public Education

    The purpose of this study is to gain a deeper understanding of how faculty view Cloud-based computing, how they perceive issues of privacy, security, and trust when using Cloud-based systems in schools, and what differences, if any, exist between their at-home use of Cloud-based computer systems and their use of these and similar systems at work. Educators who took part in this study (a) demonstrated a relatively good understanding of the Cloud; (b) perceived issues of privacy, security, and trust related to Cloud-based computing as serious matters, which strongly influenced their acceptance of the Cloud and, to a lesser extent, their use of it; and (c) showed noticeable differences between their perceptions of the Cloud as used for school-related tasks and as used for personal, non-work-related tasks. The theoretical framework is an adaptation of F.D. Davis's 1989 Technology Acceptance Model, which, according to Venkatesh (2000), is the most widely applied model of users' acceptance and usage. Findings from this study inform efforts to improve educators' understanding of the Cloud as a dynamic technology whose constantly evolving trade-offs of convenience are increasingly becoming the enemy of privacy, security, and trust.

    Enhancing user's privacy : developing a model for managing and testing the lifecycle of consent and revocation

    Increasingly, people turn to the Internet for access to services, which often require disclosure of a significant amount of personal data. Networked technologies have enabled an explosive growth in the collection, storage and processing of personal information with notable commercial potential. However, there are asymmetries in how people are able to control their own information once it is handled by enterprises. This raises significant privacy concerns and increases the risk of privacy breaches, creating an imperative need for mechanisms offering information control functionalities. To address the lack of controls in online environments, this thesis focuses on consent and revocation mechanisms, introducing a novel approach for controlling the collection, usage and dissemination of personal data and managing privacy expectations. Drawing on an extensive multidisciplinary review on privacy and on empirical data from focus groups, this research presents a mathematical logic as the foundation for the management of consent and revocation controls in technological systems. More specifically, this work proposes a comprehensive conceptual model for consent and revocation and introduces the notion of 'informed revocation'. Based on this model, a Hoare-style logic is developed to capture the effects of expressing individuals' consent and revocation preferences. The logic is designed to support certain desirable properties, defined as healthiness conditions, and proofs that these conditions hold are provided using the Maude software. The logic is then verified in three real-world case study applications with different consent and revocation requirements: the management of employee data in a business environment, medical data in a biobank, and identity assurance in government services. The results confirm the richness and expressiveness of the logic. In addition, a novel testing strategy underpinned by this logic is presented. This strategy generates test suites for systems offering consent and revocation controls, such as the EnCoRe system, where testing was carried out successfully and identified faults in the EnCoRe implementation.
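    The Hoare-style treatment of consent and revocation can be loosely illustrated as operations with checked postconditions. The following Python sketch is a deliberately simplified illustration of the abstract's conceptual model, not the thesis's formal logic (which is specified and verified in Maude); the class, method names, and the particular healthiness condition shown are all hypothetical.

    ```python
    # Illustrative sketch only: consent and revocation as operations on a set of
    # permissions, with a Hoare-style postcondition asserted after each step.
    class ConsentState:
        def __init__(self):
            self.permissions = set()   # data-handling actions the subject consents to
            self.notices = []          # records supporting "informed revocation"

        def grant(self, action):
            self.permissions.add(action)

        def revoke(self, action, consequences=""):
            # "Informed revocation": record what the subject was told the
            # revocation entails before the permission is removed.
            self.notices.append((action, consequences))
            self.permissions.discard(action)
            # Hypothetical healthiness condition: revocation is effective.
            assert action not in self.permissions

    state = ConsentState()
    state.grant("share_with_partners")
    state.revoke("share_with_partners", "partners already holding the data are notified")
    print("share_with_partners" in state.permissions)  # False
    ```

    A test-generation strategy of the kind the abstract describes would, analogously, enumerate sequences of grant and revoke operations and check that every such postcondition holds in the system under test.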

    Trust me! Vorschlag zum Umgang mit der Vertrauensfrage im digitalen Zeitalter

    The thesis addresses the question of how, in times of comprehensive, technologically driven change, trust can serve as an effective instrument of self-determined action. It draws on Luhmann's concept of rational trust as a means of reducing complexity, and is divided into two parts. The first part examines digitally shaped everyday life as a basis for trust. To this end, the concept of a "digital system" is introduced: an explanatory model that takes up the notion of a system from systems theory and combines in it features of social and technical systems. It is argued that digital communication, and the code underlying it, increasingly shape the social system of society and "order" it structurally. The capacity to process data, and control over those data, thus become preconditions for power and participation; the disclosure of data becomes a digitally compatible demonstration of trust. The first part focuses on societal practices of data collection and exploitation, showing how communication and cooperation mechanisms are changing and how new power structures with a tendency towards a total system are emerging. Drawing on sociological and historical concepts, some basic features of a digitally determined order are worked out, and its ideological underpinnings are approached; these are traced back to the premises 'machines > humans' and 'tertium non datur'. The second part examines how individuals can make trust the basis of rational, formative action in digital everyday life. Trust and mistrust are first considered as "mechanisms" with particular functions and costs. 
    Then, following a model by Kelton et al., the concept of trust is deconstructed and trust-relevant criteria are compared with findings from research and practice. The analysis covers: 1. Preconditions for trust being needed and able to arise (uncertainty, dependence, vulnerability); this section deals with power asymmetries and the potential for harm through opaque data processing. 2. Stages of trust formation (emotional attachment, familiarity, self-control, external control, and meaning); it is shown how these can be instrumentalized, and how arational mechanisms in particular can foster the appearance of personal trust and shared meaning. The roles of truth, expectations, interpretive framings and communication patterns are also considered, along with the factors that impede the exercise of trust-supporting control and how rational trust can nevertheless be learned. 3. Framework conditions that shape trust (self-confidence, the trust of others, context); this section examines, among other things, how technical defaults foster social practices and when an apparent trust relationship is not grounded in a resilient practice of trust, which touches on the attribution of risk and danger. Some legal, technical and economic framework conditions for rational trust are also set out. 4. Indicators of trustworthiness (competence, predictability, benevolence, considerateness, and ethics); it is argued that trust in the digital system is directed predominantly at an imagined trust partner, and this is compared with other forms of trust. 
    In its engagement with practice, this section focuses on the possibilities and limits of algorithmic decision-making, with particular attention to the claim to power embedded in the term "Ethical AI". Reference points for ethics are explored in more depth in a separate chapter (positing a presumption of trust and introducing "learning thresholds" for mistrust where it may become necessary). It is shown how the rational engagement with trust ultimately leads to the question of meaning.