
    Occupant Privacy Perception, Awareness, and Preferences in Smart Office Environments

    Building management systems tout numerous benefits, such as energy efficiency and occupant comfort, but rely on vast amounts of data from various sensors. Advancements in machine learning algorithms make it possible to extract personal information about occupants and their activities beyond the intended design of a non-intrusive sensor. However, occupants are often not informed of data collection and possess different privacy preferences and thresholds for privacy loss. While privacy perceptions and preferences are best understood in smart homes, few studies have evaluated these factors in smart office buildings, where there are more users and different privacy risks. To better understand occupants' perceptions and privacy preferences, we conducted twenty-four semi-structured interviews with occupants of a smart office building between April 2022 and May 2022. We found that both data modality features and personal features contribute to people's privacy preferences. Data modality features are defined by the characteristics of the collected modality -- its spatial, security, and temporal context. Personal features, in contrast, consist of one's awareness of data modality features and data inferences, one's definitions of privacy and security, and the available rewards and utility. Our proposed model of people's privacy preferences in smart office buildings helps design more effective measures to improve people's privacy.

    Privacy-Aware Data Acquisition under Data Similarity in Regression Markets

    Data markets facilitate decentralized data exchange for applications such as prediction, learning, or inference. The design of these markets is challenged by varying privacy preferences as well as data similarity among data owners. Related works have often overlooked how data similarity impacts pricing and data value through statistical information leakage. We demonstrate that data similarity and privacy preferences are integral to market design and propose a query-response protocol using local differential privacy for a two-party data acquisition mechanism. In our regression data market model, we analyze strategic interactions between privacy-aware owners and the learner as a Stackelberg game over the asked price and privacy factor. Finally, we numerically evaluate how data similarity affects market participation and traded data value.
    Comment: Submitted to IEEE Transactions on Neural Networks and Learning Systems (submission version)
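
    The abstract does not spell out the query-response protocol; as a hypothetical illustration, an owner releasing a value under local differential privacy would perturb it before it reaches the learner, e.g. with the standard Laplace mechanism (the function name and parameters below are illustrative, not from the paper):

    ```python
    import numpy as np

    def ldp_response(true_value: float, sensitivity: float, epsilon: float,
                     rng: np.random.Generator) -> float:
        """Release a value under epsilon-local-DP via the Laplace mechanism:
        the noise scale grows as the privacy factor epsilon shrinks."""
        return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

    rng = np.random.default_rng(0)
    # A strict owner (epsilon = 0.1) releases a much noisier value than a
    # lenient one (epsilon = 10) -- this is how the privacy factor can shape
    # the traded data's value in a market like the one described above.
    strict = ldp_response(1.0, sensitivity=1.0, epsilon=0.1, rng=rng)
    lenient = ldp_response(1.0, sensitivity=1.0, epsilon=10.0, rng=rng)
    ```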

    Privacy preference mechanisms in Personal Data Storage (PDS).

    In this thesis, we study frameworks for managing users' privacy when disclosing personal data to third parties from a Personal Data Storage (PDS). A PDS is a secure digital space that allows individuals to collect and store their personal data and grant third parties access to it. The PDS has thus brought a substantial change to the way people store and control their personal data, moving from a service-centric to a user-centric model. Up to now, most research on the PDS has focused on how to enforce user privacy preferences and how to secure the data stored in the PDS. In contrast, this thesis aims at designing a Privacy-aware Personal Data Storage (P-PDS), that is, a PDS able to automatically take privacy-aware decisions on third-party access requests in accordance with user preferences. The thesis first demonstrates that semi-supervised learning can be successfully exploited to make a PDS able to decide automatically whether an access request should be authorized. We then revise this first contribution by defining strategies that achieve good accuracy without requiring too much effort from the user in the training phase. To this end, we combine active learning with the semi-supervised approach so as to improve the quality of the labeled training dataset, which in turn improves the learning models' ability to predict user privacy preferences correctly. In the second part of the thesis, we study how users' contextual information plays a vital role in deciding whether to share personal data with third parties. For example, a service provider may send an entertainment-service request to a PDS owner during office hours; the owner may deny it precisely because he or she is at the office. This implies that individuals would like to accept or deny access requests in light of their contextual information, which prior studies on the PDS have not considered. Moreover, prior research has shown that user privacy preferences may vary with contextual information. To address this, the thesis also implements a contextual privacy-aware framework for PDS (CP-PDS) that exploits contextual information to build a learning classifier able to predict user privacy preferences under various contextual scenarios. We ran several experiments on a realistic dataset with groups of evaluators, and the obtained results show the effectiveness of the proposed approaches.
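
    The abstract does not give the query-selection rule used in the active learning step; a common instance in this setting is uncertainty sampling, sketched below under the assumption of a binary accept/deny classifier (the function and data are illustrative, not the thesis's actual implementation):

    ```python
    import numpy as np

    def select_queries(probs: np.ndarray, k: int) -> np.ndarray:
        """Uncertainty sampling: pick the k unlabeled access requests whose
        predicted accept probability is closest to 0.5 (where the model is
        least sure) and ask the PDS owner to label only those."""
        uncertainty = -np.abs(probs - 0.5)          # higher = less certain
        return np.argsort(uncertainty)[-k:][::-1]   # indices of top-k uncertain

    # Predicted accept probabilities for five pending access requests.
    probs = np.array([0.95, 0.51, 0.10, 0.47, 0.80])
    queried = select_queries(probs, k=2)  # requests 1 and 3 are most uncertain
    ```

    Labeling only the most uncertain requests is one way to keep user effort low while still improving the labeled training set, which is the trade-off the thesis targets.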

    Have it your way: Individualized Privacy Assignment for DP-SGD

    When training a machine learning model with differential privacy, one sets a privacy budget. This budget represents a maximal privacy violation that any user is willing to face by contributing their data to the training set. We argue that this approach is limited because different users may have different privacy expectations. Thus, setting a uniform privacy budget across all points may be overly conservative for some users or, conversely, not sufficiently protective for others. In this paper, we capture these preferences through individualized privacy budgets. To demonstrate their practicality, we introduce a variant of Differentially Private Stochastic Gradient Descent (DP-SGD) which supports such individualized budgets. DP-SGD is the canonical approach to training models with differential privacy. We modify its data sampling and gradient noising mechanisms to arrive at our approach, which we call Individualized DP-SGD (IDP-SGD). Because IDP-SGD provides privacy guarantees tailored to the preferences of individual users and their data points, we find it empirically improves privacy-utility trade-offs.
    Comment: Published at NeurIPS'202
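
    IDP-SGD's exact calibration is derived from its privacy accountant; the toy sketch below only illustrates the underlying idea that users with stricter budgets receive larger noise multipliers on their gradients (the linear scaling rule is a simplification for illustration, not the paper's mechanism):

    ```python
    import numpy as np

    def per_user_noise_multipliers(budgets, base_sigma=1.0, base_epsilon=1.0):
        """Toy assignment: scale each user's noise multiplier inversely with
        their budget, so a smaller epsilon (a stricter privacy preference)
        means more noise on that user's gradients. IDP-SGD derives the true
        multipliers (and sampling rates) from a differential-privacy
        accountant rather than this linear rule."""
        budgets = np.asarray(budgets, dtype=float)
        return base_sigma * base_epsilon / budgets

    # Three users with increasingly relaxed privacy budgets.
    sigmas = per_user_noise_multipliers([0.5, 1.0, 2.0])
    # sigmas -> [2.0, 1.0, 0.5]: the strictest user gets the most noise.
    ```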

    KAPUER: A Decision Support System for Privacy Policies Specification

    We are using more and more devices connected to the Internet. Our smartphones, tablets, and now everyday items can share data to make our lives easier. Sharing data may harm our privacy, however, and there is a need to control it, a task that is complex especially for non-technical users. To facilitate this task, we present a decision support system, named KAPUER, that proposes high-level authorization policies by learning users' privacy preferences. KAPUER has been integrated into XACML, and three learning algorithms have been evaluated.

    Evolving Privacy Protections for Emerging Machine Learning Data Under Carpenter v. United States

    The Fourth Amendment’s third-party doctrine eliminates an individual’s reasonable expectation of privacy in information they willingly turn over to third parties. Government scrutiny of this information is not considered a search under the Fourth Amendment and is therefore not given constitutional protections. In the 2018 case Carpenter v. United States, the Supreme Court created an exception to the third-party doctrine. In Carpenter, a case involving the warrantless use of cell site location information (CSLI) in a criminal investigation, the Court held that individuals do have a reasonable expectation of privacy regarding CSLI. According to Chief Justice Roberts, despite the necessary relinquishment of some information by all cell phone users, privacy is guaranteed “[i]n light of the deeply revealing nature of CSLI, its depth, breadth, and comprehensive reach, and the inescapable and automatic nature of its collection….” The Court’s rationale in distinguishing CSLI is also applicable to the personal data that is constantly being collected by tech companies through the use of machine learning algorithms. Companies like Facebook and Google use machine learning to specifically tailor each user’s experience to their individual preferences. To do so, machine learning algorithms constantly collect, store, and analyze data about our interactions online to “learn” about our habits, ideologies, likes, dislikes, and affiliations. Given the Carpenter Court’s understanding of the constitutional complexities of high-tech communications, this comment takes the next step to explore individuals’ reasonable expectation of privacy in algorithmic learning data titrated to their personal preferences.

    The Impact of Cultural Familiarity on Students’ Social Media Usage in Higher Education

    Using social media (SM) in higher education (HE) has become unavoidable in current teaching and learning pedagogy. The current generation of students creates its own SM groups for collaboration. However, SM can be a primary source of learning distraction, since by its nature it does not support structured learning. Hence, drawing on the literature, this study proposes three learning-customised system features to be implemented on SM when it is used in HE. Nevertheless, some psychological factors appear to have a stronger impact on students’ adoption of SM for learning than the proposed features. A quantitative survey was conducted at a university in Uzbekistan to collect 52 undergraduate students’ perceptions of the proposed SM learning-customised features in Moodle. These features aim to provide a localised, personalised, and privacy-self-managed environment for collaboration in Moodle, and could be significant in predicting students’ engagement with SM in HE. The data analysis showed mostly positive feedback towards the proposed learning-customised SM; however, the surveyed students’ engagement with these features was minimal. The course leader conducted a semi-structured interview to investigate why. Although the students confirmed their acceptance of the learning-customised features, their preference for an alternative SM, Telegram, overrode their usage of the proposed learning-customised SM, Twitter. The students avoided the Moodle-integrated Twitter (which provided the highly accepted features) and chose Telegram as an external collaboration platform, driven by their familiarity and social ties with Telegram, the most popular SM in Uzbekistan. This study is part of an ongoing PhD research project that examines learners’ cognitive usage of the learning management system in greater depth; this paper, however, focuses exclusively on the impact of cultural familiarity on students’ adoption of SM in HE.

    Negotiating agents that learn about others’ preferences

    In multiagent systems, an agent does not usually have complete information about the preferences and decision-making processes of other agents. This might prevent the agents from making coordinated choices, purely due to their ignorance of what others want. This paper describes the integration of a learning module into a communication-intensive negotiating agent architecture. The learning module gives the agents the ability to learn about other agents’ preferences via past interactions. Over time, the agents can incrementally update their models of other agents’ preferences and use them to make better coordinated decisions. Combining communication and learning, as two complementary knowledge acquisition methods, helps to reduce the amount of communication needed on average, and is justified in situations where communication is computationally costly or simply not desirable (e.g., to preserve individual privacy).
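
    The abstract does not specify the model-update rule; as one hypothetical sketch of incremental preference modelling, an agent can track how often a counterpart accepted each option and rank options by a smoothed acceptance rate (class and option names below are invented for illustration):

    ```python
    from collections import Counter

    class PreferenceModel:
        """Minimal sketch: count how often another agent accepted each
        option, and score options by a Laplace-smoothed acceptance rate."""

        def __init__(self) -> None:
            self.accepted: Counter = Counter()
            self.seen: Counter = Counter()

        def observe(self, option: str, accepted: bool) -> None:
            """Incrementally update the model after each interaction."""
            self.seen[option] += 1
            if accepted:
                self.accepted[option] += 1

        def score(self, option: str) -> float:
            # Laplace smoothing keeps unseen options at a neutral 0.5 prior.
            return (self.accepted[option] + 1) / (self.seen[option] + 2)

    model = PreferenceModel()
    model.observe("morning_slot", True)
    model.observe("morning_slot", True)
    model.observe("evening_slot", False)
    # Propose the option the other agent is estimated to prefer, instead of
    # asking -- this is how learned models can substitute for communication.
    best = max(["morning_slot", "evening_slot"], key=model.score)
    ```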