Privacy, security, and trust issues in smart environments
Recent advances in networking, handheld computing and sensor technologies have driven research towards the realisation of Mark Weiser's dream of calm and ubiquitous computing (variously called pervasive computing, ambient computing, active spaces, the disappearing computer or context-aware computing). In turn, this has led to the emergence of smart environments as one significant facet of research in this domain. A smart environment, or space, is a region of the real world that is extensively equipped with sensors, actuators and computing components [1]. In effect, the smart space becomes part of a larger information system: all actions within the space potentially affect the underlying computer applications, which may themselves affect the space through the actuators. Such smart environments have tremendous potential to improve the utility of a space in many application areas. Consider, for example, a smart environment that prolongs the time an elderly or infirm person can live an independent life, or one that supports vicarious learning.
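The sensing-to-actuation loop the abstract describes (actions in the space feed the applications, which act back on the space) can be sketched roughly as below; all names (`SmartSpace`, `Rule`, the sensor keys) are illustrative assumptions, not taken from any system cited here.

```python
# Minimal sketch of a smart-space feedback loop: sensor readings flow to
# the application layer, whose rules may drive actuators in response.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Rule:
    """Maps a condition on sensor readings to an actuator command."""
    condition: Callable[[dict], bool]
    command: str

@dataclass
class SmartSpace:
    rules: list = field(default_factory=list)
    issued: list = field(default_factory=list)  # commands sent to actuators

    def on_sensor_update(self, readings: dict) -> None:
        # Every action in the space reaches the application layer...
        for rule in self.rules:
            if rule.condition(readings):
                # ...which may in turn affect the space via actuators.
                self.issued.append(rule.command)

space = SmartSpace(rules=[
    Rule(lambda r: r.get("motion") and r.get("lux", 100) < 20, "lights_on"),
    Rule(lambda r: r.get("temp_c", 21) < 18, "heating_on"),
])
space.on_sensor_update({"motion": True, "lux": 5, "temp_c": 16})
print(space.issued)  # both rules fire for this reading
```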
Generating citizen trust in e-government using a trust verification agent: A research note
This is an eGISE network paper. It is motivated by a concern about the extent to which trust issues inhibit a citizen's take-up of online public sector services or engagement with public decision and policy making. A citizen's decision to use online systems is influenced by their willingness to trust the environment and agency involved. This project addresses one aspect of individual 'trust' decisions by providing support for citizens trying to evaluate the implications of the security infrastructure provided by the agency. Based on studies of the way both groups (citizens and agencies) express their concerns and concepts in the security area, the project will develop a software tool, a trust verification agent (TVA), that can take an agency's security statements (or security audit) and infer how effectively these meet the security concerns of a particular citizen. This will enable citizens to state their concerns and obtain an evaluation of the agency's provision in appropriate 'citizen-friendly' language. Further, by employing rule-based expert-system techniques, the TVA will also be able to explain its evaluation. Engineering and Physical Sciences Research Council, UK (grant GR/T27020/01).
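A rule-based check of agency statements against citizen concerns, with an explanation attached to each verdict, could look roughly like the sketch below. The rule table and vocabulary are invented for illustration; the abstract does not specify the TVA's actual rule format.

```python
# Hypothetical sketch of the TVA idea: rules link a citizen-level concern
# to the technical provisions that would address it, and the evaluation
# explains itself by naming what is provided or missing.
RULES = {
    "keep my card details safe": {"encryption in transit", "pci-dss audit"},
    "stop others reading my messages": {"encryption in transit"},
    "know who I am dealing with": {"server certificate"},
}

def evaluate(statements: set, concerns: list) -> list:
    """Return (concern, covered?, explanation) triples."""
    report = []
    for concern in concerns:
        required = RULES.get(concern, set())
        missing = required - statements
        covered = bool(required) and not missing
        if covered:
            why = "provided by: " + ", ".join(sorted(required))
        elif required:
            why = "missing: " + ", ".join(sorted(missing))
        else:
            why = "no rule for this concern"
        report.append((concern, covered, why))
    return report

agency = {"encryption in transit", "server certificate"}
for concern, ok, why in evaluate(agency, list(RULES)):
    print(f"{'OK ' if ok else 'NOT'} {concern}: {why}")
```

The explanation string is the point of the rule-based approach: because each verdict traces back to named rules, the agent can justify its answer in citizen-friendly terms rather than returning a bare score.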
Online advertising: analysis of privacy threats and protection approaches
Online advertising, the pillar of the 'free' content on the Web, has revolutionized the marketing business in recent years by creating a myriad of new opportunities for advertisers to reach potential customers. The current advertising model builds upon an intricate infrastructure composed of a variety of intermediary entities and technologies whose main aim is to deliver personalized ads. For this purpose, a wealth of user data is collected, aggregated, processed and traded behind the scenes at an unprecedented rate. Despite the enormous value of online advertising, however, the intrusiveness and ubiquity of these practices prompt serious privacy concerns. This article surveys the online advertising infrastructure and its supporting technologies, and presents a thorough overview of the underlying privacy risks and the solutions that may mitigate them. We first analyze the threats and potential privacy attackers in this scenario of online advertising. In particular, we examine the main components of the advertising infrastructure in terms of tracking capabilities, data collection, aggregation level and privacy risk, and overview the tracking and data-sharing technologies employed by these components. Then, we conduct a comprehensive survey of the most relevant privacy mechanisms, and classify and compare them on the basis of their privacy guarantees and impact on the Web. Peer reviewed. Postprint (author's final draft).
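The core tracking mechanism the survey analyses, one intermediary embedded on many sites linking a user's visits through an identifier it sets, can be simulated in a few lines. This is a generic illustration, not the behaviour of any specific ad platform, and the class and site names are invented.

```python
# Toy simulation of third-party tracking: a single tracker served from
# many publisher pages accumulates a cross-site profile keyed by the
# identifier cookie it hands back to the browser.
import uuid

class Tracker:
    def __init__(self):
        self.profiles = {}  # cookie id -> list of sites visited

    def serve_ad(self, cookie, site):
        """Called when a page on `site` embeds the tracker's ad slot."""
        if cookie is None:                   # first contact: mint an id
            cookie = str(uuid.uuid4())
            self.profiles[cookie] = []
        self.profiles[cookie].append(site)   # aggregate browsing history
        return cookie                        # browser stores it for next time

tracker = Tracker()
cookie = None  # the browser starts with no cookie for this tracker
for site in ["news.example", "shop.example", "health.example"]:
    cookie = tracker.serve_ad(cookie, site)

# One identifier now links visits across unrelated sites.
print(tracker.profiles[cookie])
```

The privacy mechanisms the article classifies (blocking, do-not-track, anonymization, and so on) all attack some step of this loop: preventing the cookie from being set, stored, or linked across contexts.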
ENHANCING PRIVACY IN MULTI-AGENT SYSTEMS
Privacy loss is becoming one of the major problems in computing. Indeed, most Internet users (who today number around 2 billion worldwide) are concerned about their privacy. These concerns also carry over to the new branches of computing that have emerged in recent years. In particular, this thesis focuses on privacy in Multi-Agent Systems. In these systems, several agents (which may be intelligent and/or autonomous) interact to solve problems. These agents usually encapsulate personal information about the users they represent (names, preferences, credit cards, roles, etc.). Moreover, these agents usually exchange such information when they interact with each other. All of this can result in privacy loss for the users and, therefore, make users reluctant to use these technologies.
This thesis focuses on avoiding the collection and processing of personal information in Multi-Agent Systems. To avoid information collection, we propose a model that allows an agent to decide which attributes (of the personal information it holds about the user it represents) to reveal to other agents. In addition, we provide a secure agent infrastructure so that, once an agent decides to reveal an attribute to another, only the latter can access that attribute, preventing third parties from accessing it. To avoid the processing of personal information, we propose a model for managing agent identities. This model allows agents to use different identities to reduce the risk of information processing. We also describe the implementation of this model on an agent platform. Such Aparicio, JM. (2011). ENHANCING PRIVACY IN MULTI-AGENT SYSTEMS [Unpublished doctoral thesis]. Universitat PolitĂšcnica de ValĂšncia. https://doi.org/10.4995/Thesis/10251/13023
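The two ideas in the abstract, per-attribute disclosure decisions and multiple identities to hinder profiling, can be sketched as below. The sensitivity values, trust threshold rule, and pseudonym scheme are assumptions made for illustration; the thesis's actual models are richer.

```python
# Sketch of (1) an agent deciding which personal attributes to reveal
# to a counterpart, and (2) using a distinct partial identity per
# context so interactions are harder to link into one profile.
SENSITIVITY = {"name": 2, "preferences": 1, "credit_card": 3}

def attributes_to_reveal(requested: set, counterpart_trust: int) -> set:
    """Reveal an attribute only if the counterpart's trust level meets
    or exceeds the attribute's sensitivity (unknown attributes are
    treated as maximally sensitive)."""
    return {a for a in requested
            if counterpart_trust >= SENSITIVITY.get(a, 3)}

def identity_for(context: str, identities: dict) -> str:
    """Reuse one pseudonym per context; new contexts get a fresh one."""
    return identities.setdefault(context, f"pseudonym-{len(identities) + 1}")

ids = {}
print(attributes_to_reveal({"name", "credit_card"}, counterpart_trust=2))
print(identity_for("bookshop", ids), identity_for("clinic", ids))
```

Keeping the two mechanisms separate mirrors the thesis's split: disclosure control limits *collection*, while identity management limits *processing* (linking and profiling) of what has already been disclosed.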
A collective intelligence approach for building student's trustworthiness profile in online learning
(c) 2014 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works. Information and communication technologies have been widely adopted by most educational institutions to support e-Learning through different learning methodologies, such as computer-supported collaborative learning, which has become one of the most influential learning paradigms. In this context, e-Learning stakeholders are increasingly demanding new requirements; among them, information security is considered a critical factor in online collaborative processes. Information security determines the accurate development of learning activities, especially when a group of students carries out online assessment leading to grades or certificates; in these cases, information security is an essential consideration. To date, even the most advanced technological security solutions have drawbacks that impede the development of overall e-Learning security frameworks. For this reason, this paper suggests enhancing technological security models with functional approaches; namely, we propose a functional security model based on trustworthiness and collective intelligence, both of which are closely related to online collaborative learning and online assessment models. Therefore, the main goal of this paper is to discover how security can be enhanced with trustworthiness in an online collaborative learning scenario through the study of the collective intelligence processes that occur in online assessment activities. To this end, a peer-to-peer public student profile model based on trustworthiness is proposed, and the main collective intelligence processes involved in collaborative online assessment activities are presented. Peer reviewed. Postprint (author's final draft).
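One plausible reading of a peer-based trustworthiness profile is a score aggregated from peer assessments, weighted by each assessor's own current trust. The weighting formula, default trust of 0.5, and names below are assumptions for illustration, not the paper's actual model.

```python
# Sketch of updating trustworthiness profiles from peer assessments:
# each target's new trust is the mean of peer scores, weighted by the
# assessors' own trust, so established peers count for more.
def update_profiles(profiles: dict, assessments: list) -> dict:
    """assessments: (assessor, target, score in [0, 1]) triples."""
    updated = dict(profiles)
    for target in {t for _, t, _ in assessments}:
        weighted = [(profiles.get(src, 0.5), score)
                    for src, tgt, score in assessments if tgt == target]
        total_w = sum(w for w, _ in weighted)
        if total_w:
            updated[target] = sum(w * s for w, s in weighted) / total_w
    return updated

profiles = {"alice": 0.9, "bob": 0.5, "carol": 0.5}
profiles = update_profiles(profiles, [
    ("alice", "carol", 1.0),  # trusted assessor, strong endorsement
    ("bob", "carol", 0.4),    # less-trusted assessor, weaker score
])
print(round(profiles["carol"], 2))
```

Weighting by assessor trust is one simple way to make the aggregate "collective": a single low-trust peer cannot swing a profile as much as several established ones.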
Security in Pervasive Computing: Current Status and Open Issues
Millions of wireless device users are ever on the move, becoming more dependent on their PDAs, smart phones, and other handheld devices. With the advancement of pervasive computing, new and unique capabilities are available to aid mobile societies. The wireless nature of these devices has fostered a new era of mobility. Thousands of pervasive devices are able to arbitrarily join and leave a network, creating a nomadic environment known as a pervasive ad hoc network. However, mobile devices have vulnerabilities, and some are proving challenging to address. Security is the most critical challenge in pervasive computing: it is needed to ensure confidentiality, integrity, authentication, and access control, to name a few. Security for mobile devices, though still in its infancy, has drawn the attention of various researchers. As pervasive devices become incorporated into our day-to-day lives, security will increasingly become a common concern for all users, though for most it will be an afterthought, like many other computing functions. The usability and expansion of pervasive computing applications depend greatly on the security and reliability the applications provide. At this critical juncture, security research is growing. This paper examines recent trends and forward-looking investigations in several fields of security, along with a brief history of previous accomplishments in the corresponding areas. Some open issues are discussed for further investigation.
Internet of robotic things : converging sensing/actuating, hypoconnectivity, artificial intelligence and IoT Platforms
The Internet of Things (IoT) concept is evolving rapidly and influencing new developments in various application domains, such as the Internet of Mobile Things (IoMT), Autonomous Internet of Things (A-IoT), Autonomous System of Things (ASoT), Internet of Autonomous Things (IoAT), Internet of Things Clouds (IoT-C) and the Internet of Robotic Things (IoRT), all of which are advancing by using IoT technology. The IoT influence presents new development and deployment challenges in different areas, such as seamless platform integration, context-based cognitive network integration, new mobile sensor/actuator network paradigms, things identification (addressing and naming in IoT), dynamic things discoverability and many others. The IoRT presents new convergence challenges that need to be addressed: on one side, the programmability and communication of multiple heterogeneous mobile/autonomous/robotic things for cooperation, along with their coordination, configuration, exchange of information, security, safety and protection. Developments in IoT heterogeneous parallel processing/communication and dynamic systems based on parallelism and concurrency require new ideas for integrating the intelligent 'devices', collaborative robots (COBOTS), into IoT applications. Dynamic maintainability, self-healing, self-repair of resources, changing resource state, (re-)configuration and context-based IoT systems for service implementation and integration with IoT network service composition are of paramount importance when new 'cognitive devices' become active participants in IoT applications. This chapter provides an overview of the IoRT concept, technologies, architectures and applications, and comprehensive coverage of future challenges, developments and applications.