    The Evolution of Embedding Metadata in Blockchain Transactions

    The use of blockchains is growing every day, and their utility has greatly expanded from sending and receiving crypto-coins to smart contracts and decentralized autonomous organizations. Modern blockchains underpin a variety of applications, from designing a global identity to improving satellite connectivity. In our research we look at the ability of blockchains to store metadata in an increasing volume of transactions and with an evolving focus of utilization. We further show that basic approaches to improving blockchain privacy also rely on embedding metadata. This paper identifies and classifies real-life blockchain transactions that embed metadata for a number of major protocols running essentially over the bitcoin blockchain. The empirical analysis presents the evolution of metadata utilization in recent years, and the discussion suggests steps towards preventing criminal use. Metadata are relevant to any blockchain, and our analysis considers bitcoin primarily as a case study. The paper concludes that, as both the legitimate utilization of embedded metadata and the functionality of blockchains expand, applied research on improving anonymity and security must also attempt to protect against blockchain abuse. Comment: 9 pages, 6 figures, 1 table, 2018 International Joint Conference on Neural Networks
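
    One common way to embed metadata in a bitcoin transaction is through an OP_RETURN output, whose script marks the output as unspendable and carries an arbitrary payload. The sketch below is illustrative background rather than tooling from the paper; it builds such a script by hand, and the 80-byte limit reflects typical relay policy, not a consensus rule.

```python
# Minimal sketch (not from the paper): build a bitcoin OP_RETURN output
# script that embeds arbitrary metadata in a transaction.

OP_RETURN = 0x6a          # marks the output as data-carrying and unspendable
OP_PUSHDATA1 = 0x4c       # push opcode used for payloads longer than 75 bytes
MAX_RELAY_PAYLOAD = 80    # common default relay-policy limit, not a consensus rule

def op_return_script(payload: bytes) -> bytes:
    if len(payload) > MAX_RELAY_PAYLOAD:
        raise ValueError("payload exceeds the usual 80-byte relay limit")
    if len(payload) > 75:
        # longer pushes need OP_PUSHDATA1 followed by a one-byte length
        return bytes([OP_RETURN, OP_PUSHDATA1, len(payload)]) + payload
    # payloads of up to 75 bytes use the length itself as the push opcode
    return bytes([OP_RETURN, len(payload)]) + payload

metadata = b"example: protocol tag + document hash"
print(op_return_script(metadata).hex())
```

    Protocols that embed metadata this way typically pack a short protocol identifier followed by a hash or commitment into the payload, which is why such outputs can be identified and classified by scanning the chain.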

    A Human-centric Perspective on Digital Consenting: The Case of GAFAM

    According to different legal frameworks such as the European General Data Protection Regulation (GDPR), an end-user's consent constitutes one of the well-known legal bases for personal data processing. However, research has indicated that the majority of end-users have difficulty in understanding what they are consenting to in the digital world. Moreover, it has been demonstrated that marginalized people are confronted with even more difficulties when dealing with their own digital privacy. In this research, we use an enactivist perspective from cognitive science to develop a basic human-centric framework for digital consenting. We argue that the action of consenting is a sociocognitive action and includes cognitive, collective, and contextual aspects. Based on the developed theoretical framework, we present our qualitative evaluation of the consent-obtaining mechanisms implemented and used by the five big tech companies, i.e. Google, Amazon, Facebook, Apple, and Microsoft (GAFAM). The evaluation shows that these companies have failed in their efforts to empower end-users by considering the human-centric aspects of the action of consenting. We use this approach to argue that their consent-obtaining mechanisms violate principles of fairness, accountability and transparency. We then suggest that our approach may raise doubts about the lawfulness of the obtained consent, particularly considering the basic requirements of lawful consent within the legal framework of the GDPR.

    Technology, autonomy, and manipulation

    Since 2016, when the Facebook/Cambridge Analytica scandal began to emerge, public concern has grown around the threat of “online manipulation”. While these worries are familiar to privacy researchers, this paper aims to make them more salient to policymakers: first, by defining “online manipulation”, thus enabling identification of manipulative practices; and second, by drawing attention to the specific harms online manipulation threatens. We argue that online manipulation is the use of information technology to covertly influence another person’s decision-making, by targeting and exploiting their decision-making vulnerabilities. Engaging in such practices can harm individuals by diminishing their economic interests, but its deeper, more insidious harm is its challenge to individual autonomy. We explore this autonomy harm, emphasising its implications for both individuals and society, and we briefly outline some strategies for combating online manipulation and strengthening autonomy in an increasingly digital world.

    How Do Tor Users Interact With Onion Services?

    Onion services are anonymous network services that are exposed over the Tor network. In contrast to conventional Internet services, onion services are private, generally not indexed by search engines, and use self-certifying domain names that are long and difficult for humans to read. In this paper, we study how people perceive, understand, and use onion services based on data from 17 semi-structured interviews and an online survey of 517 users. We find that users have an incomplete mental model of onion services, use these services for anonymity, and have varying trust in onion services in general. Users also have difficulty discovering and tracking onion sites and authenticating them. Finally, users want technical improvements to onion services and better information on how to use them. Our findings suggest various improvements for the security and usability of Tor onion services, including ways to automatically detect phishing of onion services, clearer security indicators, and ways to manage onion domain names that are difficult to remember. Comment: Appeared in USENIX Security Symposium 2018
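
    As background on why onion domain names are self-certifying yet hard for humans to read: a (now-deprecated) version-2 onion address is simply a truncated, base32-encoded hash of the service's public key, so the name is bound to the key but cannot be made memorable. The sketch below illustrates that derivation; it is not code from the study, and the key bytes are a placeholder.

```python
# Rough sketch of version-2 onion address derivation (background only, not
# from the paper): the address is the first 80 bits of the SHA-1 digest of
# the service's DER-encoded RSA public key, base32-encoded and lowercased.
import base64
import hashlib

def v2_onion_address(der_encoded_public_key: bytes) -> str:
    digest = hashlib.sha1(der_encoded_public_key).digest()
    label = base64.b32encode(digest[:10]).decode("ascii").lower()  # 16 characters
    return label + ".onion"

# Placeholder bytes standing in for a real DER-encoded RSA public key.
fake_key = bytes(range(128))
print(v2_onion_address(fake_key))  # an unreadable 16-character label ending in .onion
```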

    Investigating the tension between cloud-related actors and individual privacy rights

    Historically, little more than lip service has been paid to the rights of individuals to act to preserve their own privacy. Personal information is frequently exploited for commercial gain, often without the person’s knowledge or permission. New legislation, such as the EU General Data Protection Regulation, has acknowledged the need for legislative protection. This regulation places the onus on service providers to preserve the confidentiality of their users’ and customers’ personal information, on pain of punitive fines for lapses. It accords special privileges to users, such as the right to be forgotten. The regulation has global jurisdiction, covering the rights of any EU resident worldwide. Assuring this legislated privacy protection presents a serious challenge, which is exacerbated in the cloud environment. A considerable number of actors are stakeholders in cloud ecosystems. Each has its own agenda, and these are not necessarily well aligned. Cloud service providers, especially those offering social media services, are interested in growing their businesses and maximising revenue. There is a strong incentive for them to capitalise on their users’ personal information and usage information. Privacy is often the first victim. Here, we examine the tensions between the various cloud actors and propose a framework that could be used to ensure that privacy is preserved and respected in cloud systems.

    Blind Interference Alignment for Private Information Retrieval

    Blind interference alignment (BIA) refers to interference alignment schemes that are designed based only on knowledge of the channel coherence pattern at the transmitters (the "blind" transmitters do not know the exact channel values). Private information retrieval (PIR) refers to the problem where a user retrieves one out of K messages from N non-communicating databases (each of which holds all K messages) without revealing anything about the identity of the desired message index to any individual database. In this paper, we identify an intriguing connection between PIR and BIA. Inspired by this connection, we characterize the information-theoretically optimal download cost of PIR when we have K = 2 messages and the number of databases, N, is arbitrary.
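
    For readers new to PIR, the toy example below illustrates the privacy requirement using the classical two-server XOR scheme: each database sees only a uniformly random subset of message indices, yet the user recovers the message it wants. This is illustrative background only; it is not the BIA-based construction of the paper and does not achieve the optimal download cost characterized there.

```python
# Toy two-server PIR over one-bit messages (classical XOR scheme, used here
# only to illustrate the privacy requirement; not the scheme from the paper).
import secrets

def make_queries(num_messages: int, desired_index: int):
    # Server 1 gets a uniformly random subset of indices (as a 0/1 vector);
    # server 2 gets the same subset with the desired index flipped.
    q1 = [secrets.randbelow(2) for _ in range(num_messages)]
    q2 = q1.copy()
    q2[desired_index] ^= 1
    return q1, q2

def answer(messages, query):
    # Each server XORs together the one-bit messages its query selects.
    acc = 0
    for selected, bit in zip(query, messages):
        if selected:
            acc ^= bit
    return acc

messages = [1, 0, 1, 1]          # K = 4 one-bit messages, replicated at both servers
i = 2                            # index the user privately wants
q1, q2 = make_queries(len(messages), i)
recovered = answer(messages, q1) ^ answer(messages, q2)
assert recovered == messages[i]  # correctness: XOR of the two answers yields message i
```

    Because each query in isolation is a uniformly random subset, neither database learns anything about the desired index; the price is extra downloaded bits, which is exactly the kind of download overhead whose optimal value the paper characterizes.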

    Introduction to Data Ethics

    An introduction to data ethics, focusing on questions of privacy and personal identity in the economic world as it is defined by big data technologies, artificial intelligence, and algorithmic capitalism. Originally published in The Business Ethics Workshop, 3rd Edition, by Boston Academic Publishing / FlatWorld Knowledge.