
    Rise of big data – issues and challenges

    The recent rapid rise in the availability of big data, driven by Internet-based technologies such as social media platforms and mobile devices, has left many market leaders unprepared for handling very large, random, high-velocity data. Conventionally, technologies are first developed and tested in laboratories and introduced to the public through channels such as press releases and advertisements, before being adopted by the general public. In the case of big data, rapid development and ready acceptance by the user community have left little time for scrutiny by the academic community. Although professionals and practitioners have published many books and electronic media articles on big data, fundamental work in the academic literature remains scarce. Using survey methods, this paper discusses challenges in different aspects of big data, such as data sources, content format, data staging, data processing, and prevalent data stores. Issues and challenges related to big data, specifically privacy attacks and counter-techniques such as k-anonymity, t-closeness, l-diversity and differential privacy, are discussed. Tools and techniques adopted by various organizations to store different types of big data are also highlighted. This study identifies research areas to address, such as the lack of anonymization techniques for unstructured big data, the determination of data traffic patterns for developing scalable data storage solutions, and controlling mechanisms for high-velocity data.
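    As a rough illustration of two of the counter-techniques this survey names, the sketch below checks the basic k-anonymity property and releases a numeric query answer under epsilon-differential privacy via the Laplace mechanism. It is a minimal teaching sketch, not code from the paper; the record layout, field names and parameter values are assumptions chosen for the example.

```python
import random
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Basic k-anonymity check: every combination of quasi-identifier
    values must be shared by at least k records."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value under epsilon-differential privacy by adding
    Laplace noise with scale sensitivity/epsilon, sampled here as the
    difference of two i.i.d. exponential variates."""
    scale = sensitivity / epsilon
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_value + noise

# Hypothetical example: two records sharing generalized quasi-identifiers
# are 2-anonymous; a count query over them is released with noise.
records = [
    {"age": "30-39", "zip": "021**", "diagnosis": "flu"},
    {"age": "30-39", "zip": "021**", "diagnosis": "asthma"},
]
print(is_k_anonymous(records, ["age", "zip"], k=2))        # True
print(laplace_mechanism(true_value=2, sensitivity=1, epsilon=0.5))
```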

    A Customizable k-Anonymity Model for Protecting Location Privacy

    Continued advances in mobile networks and positioning technologies have created a strong market push for location-based services (LBSs). Examples include location-aware emergency services, location-based advertising, and location-sensitive billing. One of the big challenges in the wide deployment of LBS systems is the privacy-preserving management of location-based data. Without safeguards, extensive deployment of location-based services endangers the location privacy of mobile users and exhibits significant vulnerabilities for abuse. In this paper, we describe a customizable k-anonymity model for protecting the privacy of location data. Our model has two unique features. First, we provide a customizable framework that supports k-anonymity with variable k, allowing a wide range of users to benefit from location privacy protection with personalized privacy requirements. Second, we design and develop a novel spatio-temporal cloaking algorithm, called CliqueCloak, which provides location k-anonymity for mobile users of an LBS provider. The cloaking algorithm is run by the location protection broker on a trusted server, which anonymizes messages from the mobile nodes by cloaking the location information they contain, reducing or avoiding privacy threats before forwarding them to the LBS provider(s). Our model enables each message sent from a mobile node to specify the desired level of anonymity as well as the maximum temporal and spatial tolerances for maintaining the required anonymity. We study the effectiveness of the cloaking algorithm under various conditions using realistic location data synthetically generated from real road maps and traffic volume data. Our experiments show that the location k-anonymity model with multi-dimensional cloaking and a tunable k parameter achieves strong guarantees of k-anonymity and high resilience to location privacy threats without a significant performance penalty.
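    The cloaking idea can be illustrated with a deliberately naive stand-in: grow a region around the requester until it covers at least k users, or give up once the message's spatial tolerance is exceeded. This Python sketch is hypothetical and is not the paper's CliqueCloak algorithm (which finds cliques in a constraint graph and also handles temporal tolerances); it only shows the interface a cloaking step exposes, namely that the broker reports a box rather than the exact coordinate, with k and the tolerance carried per message.

```python
from dataclasses import dataclass

@dataclass
class User:
    uid: int
    x: float
    y: float

def cloak_location(requester, users, k, max_radius):
    """Grow a square region around the requester until it contains at
    least k users (requester included), or fail at max_radius."""
    step = max_radius / 20.0   # growth granularity; an arbitrary choice here
    radius = step
    while radius <= max_radius:
        box = (requester.x - radius, requester.y - radius,
               requester.x + radius, requester.y + radius)
        inside = [u for u in users
                  if box[0] <= u.x <= box[2] and box[1] <= u.y <= box[3]]
        if len(inside) >= k:
            # The broker forwards this box to the LBS, not the exact point.
            return box
        radius += step
    return None  # anonymity unachievable within the spatial tolerance

# Hypothetical usage: 3-anonymity within a tolerance of 1.0 map units.
alice = User(1, 0.0, 0.0)
crowd = [alice, User(2, 0.3, 0.1), User(3, -0.2, 0.4), User(4, 5.0, 5.0)]
print(cloak_location(alice, crowd, k=3, max_radius=1.0))
```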

    Healthcare, lifestyle and well-being apps. Who am I sharing my data with and what for?

    Today we can find healthcare, lifestyle and well-being applications for smartphones, tablets and wearable devices covering a plethora of aspects, from monitoring pregnancy to checking blood pressure. These apps are used in our everyday life and not only in medical settings. They hold the promise of enhancing, managing, predicting and improving our individual health and healthcare services. Anyone with a smartphone can share personal health-related data with healthcare services and companies, as well as with other organisations and individuals. While people use these applications, social media companies harvest, archive and manage huge amounts of data, also known as big data. These two issues, people sharing personal data and the harvesting of those data, have serious implications for the way in which research on human subjects can be undertaken and for the ethical frameworks that regulate such research. Key concerns are privacy, anonymity and data management. This talk focuses on the ethical implications arising from the everyday use of healthcare applications and on the risks, challenges and threats of big data. It addresses three issues: (1) the collection, analysis and sharing of personalised social media data; (2) anonymity; and (3) privacy in a context where people are encouraged to share their data.

    A Privacy Preserving Framework for RFID Based Healthcare Systems

    RFID (Radio Frequency IDentification) is anticipated to be a core technology used in many practical applications of our lives in the near future. It has received considerable attention within healthcare for almost a decade now. The technology’s promise to efficiently track hospital supplies, medical equipment, medications and patients is an attractive proposition to the healthcare industry. However, the prospect of widespread use of RFID tags in healthcare has also triggered discussions regarding privacy, particularly because RFID data in transit may easily be intercepted and used to track the tag's user (owner). In a nutshell, the technology has not yet realized its true potential in the healthcare industry because the privacy concerns raised by tag bearers are not properly addressed by existing identification techniques. An RFID-based healthcare system requires two major types of privacy-preservation techniques: (1) a privacy-preserving authentication protocol for sensing RFID tags for different identification and monitoring purposes, and (2) a privacy-preserving access control mechanism to restrict unauthorized access to private information while providing healthcare services using the tag ID. In this paper, we propose a framework (PriSens-HSAC) that makes an effort to address these two privacy issues. To the best of our knowledge, it is the first framework to provide increased privacy in RFID-based healthcare systems by combining RFID authentication with access control techniques.
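    For the first of the two technique types the paper lists, privacy-preserving tag authentication, a classic baseline is the randomized hash-lock style of protocol, sketched below in Python. This is an illustrative stand-in under assumed names, not the PriSens-HSAC protocol itself: the tag never emits its identifier in the clear, and its response changes on every query, so an eavesdropper cannot link reads and track the bearer.

```python
import hashlib
import os

def h(data: bytes) -> bytes:
    """Public hash function shared by tags and the back-end server."""
    return hashlib.sha256(data).digest()

class Tag:
    """A tag holding a secret ID that is never transmitted in the clear."""
    def __init__(self, secret_id: bytes):
        self._id = secret_id

    def respond(self, reader_nonce: bytes):
        # A fresh tag nonce makes every response unlinkable to earlier
        # ones, so eavesdroppers cannot track the tag bearer across reads.
        tag_nonce = os.urandom(8)
        return tag_nonce, h(self._id + reader_nonce + tag_nonce)

class BackEnd:
    """Trusted server holding the list of legitimate tag IDs."""
    def __init__(self, known_ids):
        self._ids = list(known_ids)

    def identify(self, reader_nonce, tag_nonce, digest):
        # Recompute the digest for every known ID; this linear search is
        # the well-known scalability cost of hash-lock schemes.
        for tid in self._ids:
            if h(tid + reader_nonce + tag_nonce) == digest:
                return tid
        return None

# One authentication round.
secret = os.urandom(16)
tag, backend = Tag(secret), BackEnd([secret])
challenge = os.urandom(8)
nonce, digest = tag.respond(challenge)
assert backend.identify(challenge, nonce, digest) == secret
```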

    User's Privacy in Recommendation Systems Applying Online Social Network Data, A Survey and Taxonomy

    Recommender systems have become an integral part of many social networks and extract knowledge from a user's personal and sensitive data, both explicitly, with the user's knowledge, and implicitly. This trend has created major privacy concerns, as users are mostly unaware of what data are being used, how much, and how securely. In this context, considerable work has been done to address privacy concerns in online social network data and recommender systems. This paper surveys the main privacy concerns, measurements and privacy-preserving techniques used in large-scale online social networks and recommender systems. It draws on prior work on security, privacy preservation, statistical modeling, and datasets to provide an overview of the technical difficulties and problems associated with privacy preservation in online social networks.

    Comment: 26 pages; IET book chapter on big data recommender systems

    Privacy in Public and the contextual conditions of agency

    Current technology and surveillance practices make behaviors traceable to persons in unprecedented ways. This causes a loss of anonymity and of many privacy measures relied on in the past. These de facto privacy losses are seen by many as problematic for individual psychology, intimate relations and democratic practices such as free speech and free assembly. I share most of these concerns but propose that an even more fundamental problem might be that our very ability to act as autonomous and purposive agents relies on some degree of privacy, perhaps particularly as we act in public and semi-public spaces. I suggest that basic issues concerning action choices have been left largely unexplored, due to a series of problematic theoretical assumptions at the heart of privacy debates. One such assumption has to do with the influential conceptualization of privacy as pertaining to personal intimate facts belonging to a private sphere, as opposed to a public sphere of public facts. As Helen Nissenbaum has pointed out, the notion of privacy in public sounds almost like an oxymoron given this traditional private-public dichotomy. I discuss her important attempt to defend privacy in public through her concept of ‘contextual integrity.’ Context is crucial, but Nissenbaum’s descriptive notion of existing norms seems to fall short of a solution. I here agree with Joel Reidenberg’s recent worries regarding any approach that relies on ‘reasonable expectations’: the problem is that in many current contexts we have no such expectations. Our contexts have already lost their integrity, so to speak. By way of a functional and more biologically inspired account, I analyze the relational and contextual dynamics of both privacy needs and harms. Through an understanding of action choice as situated, and of options and capabilities as relational, a more consequence-oriented notion of privacy begins to appear. I suggest that privacy needs, harms and protections are relational. Privacy might have less to do with seclusion and absolute transactional control than hitherto thought. It might instead hinge on capacities to limit the social consequences of our actions by knowing and shaping our perceptible agency and social contexts of action. To act with intent we generally need the ability to conceal during exposure. If this analysis is correct, then relational privacy is an important condition for autonomous, purposive and responsible agency, particularly in public space. Overall, this chapter offers a first stab at a reconceptualization of our privacy needs as relational to contexts of action. In terms of ‘rights to privacy’ this means that we should expand our view from the regulation and protection of the information of individuals to questions about the kinds of contexts we are creating. I am here particularly interested in what I call ‘unbounded contexts’, i.e. cases of context collapse, hidden audiences and even unknowable future agents.