
    Privacy protection in trust models for agent societies

    Proceeding of: International Joint Conference SOCO'14-CISIS'14-ICEUTE'14, Bilbao, Spain, June 25th–27th, 2014. In this paper we motivate the use of privacy-protection measures in trust models, both in conscious exchanges of opinions and when security attacks take place without the agents' awareness. Most privacy dimensions are involved in trust communications. In particular, we define the privacy rights that these trusting communications must legally guarantee. From them, we describe additional message exchanges that, acting as control mechanisms, would be required to exercise such rights. We also enumerate the corresponding privacy violations that would take place if these control mechanisms were ignored. From the possible existence of privacy violations, regulatory structures may establish what agents are allowed and forbidden to do according to the legal privacy rights. We have applied the control mechanisms, as additional message exchanges, to a particular application domain (the Agent Trust and Reputation testbed) implemented as JADE interaction protocols, and we plan to define an Electronic Institution that would rule the corresponding norms and violations of such control using the Islander specification tool. This work was supported in part by Projects MINECO TEC2012-37832-C02-01, CICYT TEC2011-28626-C02-02, and CAM CONTEXTS (S2009/TIC-1485).
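The abstract above describes control mechanisms realised as extra message exchanges that let agents exercise privacy rights over shared opinions, with a violation recorded whenever such a request is ignored. A minimal sketch of that idea, assuming a hypothetical `TrustAgent` class and an "erasure" right (the names are illustrative, not the paper's actual JADE protocol):

```python
# Hypothetical sketch: a data subject exercises an erasure right over an
# opinion held by another agent; an ignored request is logged as a privacy
# violation that a regulatory structure (e.g. an Electronic Institution)
# could later sanction. Illustrative only, not the paper's JADE protocol.

class TrustAgent:
    def __init__(self, name):
        self.name = name
        self.opinions = {}    # subject -> opinion value received from others
        self.violations = []  # (requester, subject) pairs for ignored requests

    def receive_opinion(self, sender, subject, value):
        self.opinions[subject] = value

    def request_erasure(self, holder, subject):
        # Control message: ask the holder to delete an opinion about us.
        return holder.handle_erasure(self, subject)

    def handle_erasure(self, requester, subject, comply=True):
        if comply and subject in self.opinions:
            del self.opinions[subject]
            return "erased"
        # Ignoring the control message constitutes a privacy violation.
        self.violations.append((requester.name, subject))
        return "violation"


a, b = TrustAgent("alice"), TrustAgent("bob")
b.receive_opinion(a, "alice", 0.8)
print(a.request_erasure(b, "alice"))  # prints "erased"
```

In the paper's setting the same exchange would be a FIPA-style interaction protocol between JADE agents; the sketch only shows the request/comply/violate structure.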

    Trust and Privacy Permissions for an Ambient World

    Ambient intelligence (AmI) and ubiquitous computing allow us to consider a future where computation is embedded into our daily social lives. This vision raises its own important questions and augments the need to understand how people will trust such systems while achieving and maintaining privacy. As a result, we have recently conducted a wide-reaching study of people's attitudes to potential AmI scenarios with a view to eliciting their privacy concerns. This chapter describes recent research related to privacy and trust with regard to ambient technology. The method used in the study is described and the findings are discussed.

    Trust in social machines: the challenges

    The World Wide Web has ushered in a new generation of applications constructively linking people and computers to create what have been called 'social machines.' The 'components' of these machines are people and technologies. It has long been recognised that for people to participate in social machines, they have to trust the processes. However, the notions of trust often used tend to be imported from agent-based computing, and may be too formal, objective and selective to describe human trust accurately. This paper applies a theory of human trust to social machines research, and sets out some of the challenges to system designers.

    Personalised privacy in pervasive and ubiquitous systems

    Our world is edging closer to the realisation of pervasive systems and their integration into our everyday life. While pervasive systems are capable of offering many benefits for everyone, the amount and quality of personal information that becomes available raise concerns about maintaining user privacy, and create a real need to reform existing privacy practices and provide appropriate safeguards for the user of pervasive environments. This thesis presents the PERSOnalised Negotiation, Identity Selection and Management (PersoNISM) system: a comprehensive approach to privacy protection in pervasive environments using context-aware dynamic personalisation and behaviour learning. The aim of the PersoNISM system is twofold: to provide the user with a comprehensive set of privacy-protecting tools, and to help them make the best use of these tools according to their privacy needs. The PersoNISM system allows users to: a) configure the terms and conditions of data disclosure through the process of privacy policy negotiation, which addresses the current "take it or leave it" approach; b) use multiple identities to interact with pervasive services to avoid the accumulation of vast amounts of personal information in a single user profile; and c) selectively disclose information based on the type of information, who requests it, under what context, for what purpose, and how the information will be treated. The PersoNISM system learns user privacy preferences by monitoring the behaviour of the user, and uses them to personalise and/or automate the decision-making processes in order to unburden the user from manually controlling these complex mechanisms. The PersoNISM system has been designed, implemented, demonstrated and evaluated during three EU-funded projects.
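The selective-disclosure decision described in point c) above depends jointly on the type of information, the requester, the context, and the purpose. A minimal sketch of such a rule-based decision, assuming hypothetical rule fields and a `decide_disclosure` function (illustrative, not the actual PersoNISM interfaces):

```python
# Hypothetical sketch of selective disclosure in the spirit of PersoNISM:
# a learned preference rule matches on (info type, requester, context,
# purpose) and yields a decision; with no matching rule, the system falls
# back to asking the user, mirroring the manual control it aims to reduce.
# Rule fields and names are illustrative, not the actual PersoNISM API.

def decide_disclosure(rules, info_type, requester, context, purpose):
    for rule in rules:
        if (rule["info_type"] == info_type
                and rule["requester"] in (requester, "*")   # "*" = wildcard
                and rule["context"] in (context, "*")
                and rule["purpose"] in (purpose, "*")):
            return rule["decision"]
    return "ask_user"  # no learned preference: defer to the user

rules = [
    {"info_type": "location", "requester": "navigation_service",
     "context": "commuting", "purpose": "routing", "decision": "disclose"},
    {"info_type": "location", "requester": "*",
     "context": "*", "purpose": "advertising", "decision": "withhold"},
]

print(decide_disclosure(rules, "location", "navigation_service",
                        "commuting", "routing"))      # prints "disclose"
print(decide_disclosure(rules, "location", "ad_network",
                        "at_home", "advertising"))    # prints "withhold"
```

In the thesis, such preferences are learned from observed user behaviour rather than hand-written; the sketch only shows the shape of the final decision step.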

    Security in Pervasive Computing: Current Status and Open Issues

    Millions of wireless device users are ever on the move, becoming more dependent on their PDAs, smart phones, and other handheld devices. With the advancement of pervasive computing, new and unique capabilities are available to aid mobile societies. The wireless nature of these devices has fostered a new era of mobility. Thousands of pervasive devices are able to arbitrarily join and leave a network, creating a nomadic environment known as a pervasive ad hoc network. However, mobile devices have vulnerabilities, and some are proving to be challenging. Security is the most critical challenge in pervasive computing: it is needed to ensure confidentiality, integrity, authentication, and access control, to name a few. Security for mobile devices, though still in its infancy, has drawn the attention of various researchers. As pervasive devices become incorporated into our day-to-day lives, security will increasingly become a common concern for all users, though for most it will be an afterthought, like many other computing functions. The usability and expansion of pervasive computing applications depend greatly on the security and reliability provided by the applications. At this critical juncture, security research is growing. This paper examines recent trends and forward-looking research in several fields of security, along with a brief history of previous accomplishments in the corresponding areas. Some open issues are discussed for further investigation.

    The control over personal data: True remedy or fairy tale?

    This research report undertakes an interdisciplinary review of the concept of "control" (i.e. the idea that people should have greater "control" over their data), proposing an analysis of this concept in the fields of law and computer science. Despite the omnipresence of the notion of control in EU policy documents, scholarly literature and the press, the very meaning of this concept remains surprisingly vague and under-studied in the face of contemporary socio-technical environments and practices. Beyond the currently fashionable rhetoric of empowerment of the data subject, this report attempts to reorient the scholarly debates towards a more comprehensive and refined understanding of the concept of control by questioning its legal and technical implications for the data subject's agency.