Quality of Information in Mobile Crowdsensing: Survey and Research Challenges
Smartphones have become the most pervasive devices in people's lives, and are
clearly transforming the way we live and perceive technology. Today's
smartphones benefit from almost ubiquitous Internet connectivity and come
equipped with a plethora of inexpensive yet powerful embedded sensors, such as
accelerometer, gyroscope, microphone, and camera. This unique combination has
enabled revolutionary applications based on the mobile crowdsensing paradigm,
such as real-time road traffic monitoring, air and noise pollution, crime
control, and wildlife monitoring, just to name a few. Unlike in prior sensing
paradigms, humans are now the primary actors of the sensing process, as they
are essential to retrieving reliable and up-to-date information about the
monitored event. As humans may behave unreliably or
maliciously, assessing and guaranteeing Quality of Information (QoI) becomes
more important than ever. In this paper, we provide a new framework for
defining and enforcing the QoI in mobile crowdsensing, and analyze in depth the
current state-of-the-art on the topic. We also outline novel research
challenges, along with possible directions of future work. To appear in ACM Transactions on Sensor Networks (TOSN).
From Manifesta to Krypta: The Relevance of Categories for Trusting Others
In this paper we consider the special abilities needed by agents for assessing trust based on inference and reasoning. We analyze the case in which it is possible to infer trust towards unknown counterparts by reasoning on abstract classes or categories of agents shaped in a concrete application domain. We present a scenario of interacting agents providing a computational model implementing different strategies to assess trust. Assuming a medical domain, categories, including both competencies and dispositions of possible trustees, are exploited to infer trust towards possibly unknown counterparts. The proposed approach for the cognitive assessment of trust relies on agents' abilities to analyze heterogeneous information sources along different dimensions. Trust is inferred based on specific observable properties (Manifesta), namely explicitly readable signals indicating internal features (Krypta) regulating agents' behavior and effectiveness on specific tasks. Simulation experiments evaluate the performance of trusting agents adopting different strategies to delegate tasks to possibly unknown trustees; the results show the relevance of this kind of cognitive ability in open Multi-Agent Systems.
Reasoning with Categories for Trusting Strangers: a Cognitive Architecture
A crucial issue for agents in open systems is the ability to filter out information sources in order to build an image of their counterparts, upon which a subjective evaluation of trust as a promoter of interactions can be assessed. While typical solutions discern relevant information sources by relying on previous experiences or reputational images, this work presents an alternative approach based on the cognitive ability to: (i) analyze heterogeneous information sources along different dimensions; (ii) ascribe qualities to unknown counterparts based on reasoning over abstract classes or categories; and (iii) learn a series of emergent relationships between particular properties observable on other agents and their effective abilities to fulfill tasks. A computational architecture is presented allowing cognitive agents to dynamically assess trust based on a limited set of observable properties, namely explicitly readable signals (Manifesta) through which it is possible to infer hidden properties and capabilities (Krypta), which finally regulate agents' behavior in concrete work environments. An experimental evaluation discusses the effectiveness of trustor agents adopting different strategies to delegate tasks based on categorization.
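The Manifesta-to-Krypta idea above can be illustrated with a minimal sketch. This is not the paper's architecture: the category profiles, signal names, and the rule-based `categorize` function are all hypothetical stand-ins for the learned inference the abstract describes.

```python
# Illustrative sketch (not the paper's model): trust a stranger by mapping
# observable signals (Manifesta) to a category, then to an expected task
# competence (a proxy for hidden Krypta). All names and values are invented.

# Hypothetical category profiles: expected success rate per task type.
CATEGORY_COMPETENCE = {
    "cardiologist": {"heart_surgery": 0.9, "diagnosis": 0.8},
    "nurse":        {"heart_surgery": 0.2, "diagnosis": 0.6},
}

def categorize(manifesta):
    """Map observable signals to a category (a toy stand-in for the
    reasoning over abstract classes described in the abstract)."""
    if "stethoscope" in manifesta and "surgeon_badge" in manifesta:
        return "cardiologist"
    return "nurse"

def pick_trustee(candidates, task):
    """Delegate the task to the candidate whose inferred category
    promises the highest expected competence."""
    return max(candidates,
               key=lambda signals: CATEGORY_COMPETENCE[categorize(signals)].get(task, 0.0))

agents = [{"stethoscope", "surgeon_badge"}, {"stethoscope"}]
best = pick_trustee(agents, "heart_surgery")   # the "cardiologist" candidate
```

The point of the sketch is only the two-step inference: observable signals select a category, and the category, not direct experience with the individual, supplies the trust estimate.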
An objective based classification of aggregation techniques for wireless sensor networks
Wireless Sensor Networks have gained immense popularity in recent years due to their ever-increasing capabilities and wide range of critical applications. A huge body of research effort has been dedicated to finding ways to utilize the limited resources of these sensor nodes in an efficient manner. One of the common ways to minimize energy consumption has been aggregation of input data. We note that every aggregation technique has an improvement objective to achieve with respect to the output it produces. Each technique is designed to achieve some target, e.g., reducing data size, minimizing transmission energy, or enhancing accuracy. This paper presents a comprehensive survey of aggregation techniques that can be used in a distributed manner to improve the lifetime and energy conservation of wireless sensor networks. The main contribution of this work is a novel classification of such techniques based on the type of improvement they offer when applied to WSNs. Due to the existence of a myriad of definitions of aggregation, we first review the meaning of the term aggregation as applied to WSNs. The concept is then associated with the proposed classes. Each class of techniques is divided into a number of subclasses, and a brief literature review of related WSN work for each of these is also presented.
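The core trade-off the abstract names, reducing data size to save transmission energy, can be sketched with the simplest aggregate, a mergeable mean. This is an illustrative example, not any specific technique from the survey; the cluster-head framing is an assumption.

```python
# Minimal sketch of in-network aggregation: instead of forwarding every raw
# reading, a cluster head forwards one (mean, count) aggregate, and parent
# nodes merge child aggregates without touching raw data.

def aggregate_mean(readings):
    """Reduce N sensor readings to a single (mean, count) pair."""
    return (sum(readings) / len(readings), len(readings))

def merge(agg_a, agg_b):
    """Combine two partial aggregates exactly, as a parent node would
    when merging results from two child clusters."""
    (m_a, n_a), (m_b, n_b) = agg_a, agg_b
    n = n_a + n_b
    return ((m_a * n_a + m_b * n_b) / n, n)

cluster1 = aggregate_mean([21.0, 23.0, 22.0])   # (22.0, 3)
cluster2 = aggregate_mean([25.0, 27.0])         # (26.0, 2)
root = merge(cluster1, cluster2)                # (23.6, 5)
```

Note the "reduce data size" objective in action: five readings cross the network as two small tuples, yet the root recovers the exact global mean.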
Flow-based reputation with uncertainty: Evidence-Based Subjective Logic
The concept of reputation is widely used as a measure of trustworthiness
based on ratings from members in a community. The adoption of reputation
systems, however, relies on their ability to capture the actual trustworthiness
of a target. Several reputation models for aggregating trust information have
been proposed in the literature. The choice of model has an impact on the
reliability of the aggregated trust information as well as on the procedure
used to compute reputations. Two prominent models are flow-based reputation
(e.g., EigenTrust, PageRank) and Subjective Logic based reputation. Flow-based
models provide an automated method to aggregate trust information, but they are
not able to express the level of uncertainty in the information. In contrast,
Subjective Logic extends probabilistic models with an explicit notion of
uncertainty, but the calculation of reputation depends on the structure of the
trust network and often requires information to be discarded. These are severe
drawbacks.
In this work, we observe that the `opinion discounting' operation in
Subjective Logic has a number of basic problems. We resolve these problems by
providing a new discounting operator that describes the flow of evidence from
one party to another. The adoption of our discounting rule results in a
consistent Subjective Logic algebra that is entirely based on the handling of
evidence. We show that the new algebra enables the construction of an automated
reputation assessment procedure for arbitrary trust networks, where the
calculation no longer depends on the structure of the network, and does not
need to throw away any information. Thus, we obtain the best of both worlds:
flow-based reputation and consistent handling of uncertainties.
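For context, the baseline machinery the abstract critiques can be sketched in a few lines: the standard mapping from evidence counts to a Subjective Logic opinion, and the classic opinion-discounting operator whose problems motivate the paper. The paper's own evidence-based replacement operator is not reproduced here; this shows only the standard formulation.

```python
# Sketch of standard Subjective Logic (the baseline this paper critiques).
# An opinion is (belief, disbelief, uncertainty) with b + d + u = 1.

def opinion_from_evidence(r, s):
    """Map r positive and s negative observations to an opinion;
    the constant 2 is the standard non-informative prior weight."""
    k = r + s + 2
    return (r / k, s / k, 2 / k)

def discount(op_ab, op_bx):
    """Classic opinion discounting: A's belief in B scales B's opinion
    about X; B's disbelief and uncertainty become uncertainty for A."""
    b1, d1, u1 = op_ab
    b2, d2, u2 = op_bx
    return (b1 * b2, b1 * d2, d1 + u1 + b1 * u2)

a_trusts_b = opinion_from_evidence(8, 0)   # (0.8, 0.0, 0.2)
b_rates_x  = opinion_from_evidence(3, 3)   # (0.375, 0.375, 0.25)
a_about_x  = discount(a_trusts_b, b_rates_x)
```

Note how `discount` depends on the path A → B → X: applying it across an arbitrary trust network forces a choice of paths and discards evidence, which is exactly the structural dependence the paper's evidence-based algebra removes.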
CARS – A Spatio-Temporal BDI Recommender System: Time, Space and Uncertainty
Agent-based recommender systems have been exploited in recent years to provide informative suggestions to users, showing the advantage of exploiting components like beliefs, goals and trust in the computation of recommendations. However, many real-world scenarios, such as traffic, require the additional feature of representing and reasoning about spatial and temporal knowledge, considering also their vague connotation. This paper tackles this challenge and introduces CARS, a spatio-temporal agent-based recommender system based on the Belief-Desire-Intention (BDI) architecture. Our approach extends the BDI model with spatial and temporal information to represent and reason about the dynamics of fuzzy beliefs and desires. An experimental evaluation of spatio-temporal reasoning in the traffic domain is carried out using the NetLogo platform, showing the improvements our recommender system introduces to support agents in achieving their goals.
BPRS: Belief Propagation Based Iterative Recommender System
In this paper we introduce the first application of the Belief Propagation
(BP) algorithm in the design of recommender systems. We formulate the
recommendation problem as an inference problem and aim to compute the marginal
probability distributions of the variables which represent the ratings to be
predicted. However, computing these marginal probability functions is
computationally prohibitive for large-scale systems. Therefore, we utilize the
BP algorithm to efficiently compute these functions. Recommendations for each
active user are then iteratively computed by probabilistic message passing. As
opposed to previous recommender algorithms, BPRS does not require solving
the recommendation problem for all users in order to update the
recommendations for only a single active user. Further, BPRS computes the
recommendations for each user with linear complexity and without requiring a
training period. Via computer simulations (using the 100K MovieLens dataset),
we verify that BPRS iteratively reduces the error in the predicted ratings of
the users until it converges. Finally, we confirm that BPRS is comparable to
state-of-the-art methods such as the Correlation-based neighborhood model (CorNgbr)
and Singular Value Decomposition (SVD) in terms of rating and precision
accuracy. Therefore, we believe that the BP-based recommendation algorithm is a
new promising approach which offers a significant advantage on scalability
while providing competitive accuracy for recommender systems.
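The inference step the abstract describes, computing marginal distributions by message passing, can be illustrated on a toy factor graph. This is not the BPRS rating model: the two binary variables and the potentials below are invented, and on a tree like this, sum-product belief propagation recovers the exact marginal in one pass.

```python
# Tiny sum-product sketch (not the BPRS factor graph): two binary
# variables x1, x2 coupled by one pairwise factor. BP computes the
# marginal of x2 by summing x1 out of the incoming message.

phi1 = [0.9, 0.1]          # unary potential on x1 (prior-like evidence)
phi2 = [0.5, 0.5]          # unary potential on x2 (uninformative)
psi  = [[0.8, 0.2],        # pairwise factor psi[x1][x2],
        [0.2, 0.8]]        # favoring x1 == x2

# Message from the pairwise factor to x2: sum out x1.
msg_to_x2 = [sum(phi1[a] * psi[a][b] for a in range(2)) for b in range(2)]

# Belief (unnormalized marginal) of x2, then normalize.
belief = [phi2[b] * msg_to_x2[b] for b in range(2)]
z = sum(belief)
marginal_x2 = [v / z for v in belief]   # -> [0.74, 0.26]
```

In BPRS the same mechanics run at scale: unobserved ratings play the role of the variables, and iterated message passing approximates their marginals with per-user linear complexity.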