Training a personal alert system for research information recommendation
Information systems, and Current Research Information Systems (CRISs) in particular, are usually difficult to query for specific information because of the huge amounts of data they contain. To address this problem, we propose a personal search agent that uses fuzzy and rough sets to inform the user about newly available information. Additionally, to automate the operation of our solution and to provide it with sufficient information, a document classification module is developed and tested. This module also generates fuzzy relations between research domains that are used by the agent during the mapping process.
Fuzzy argumentation for trust
In an open Multi-Agent System, the goals of agents acting on behalf of their owners often conflict with each other. Therefore, a personal agent protecting the interests of a single user cannot always rely on other agents. Consequently, such a personal agent needs to be able to reason about trusting (information or services provided by) other agents. Existing algorithms that perform such reasoning mainly focus on the immediate utility of a trusting decision, but do not provide an explanation of their actions to the user. This may hinder the acceptance of agent-based technologies in sensitive applications where users need to rely on their personal agents. Against this background, we propose a new approach to trust based on argumentation that aims to expose the rationale behind such trusting decisions. Our solution features a separation of opponent modeling and decision making. It uses possibilistic logic to model the behavior of opponents, and we propose an extension of the argumentation framework by Amgoud and Prade to use the fuzzy rules within these models for well-supported decisions.
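As a rough illustration of the opponent-modeling idea, a possibilistic knowledge base attaches a necessity degree to each rule, and a derived conclusion inherits the minimum of the premise's and the rule's degrees. The rule base, proposition names, and min-based forward chaining below are assumptions for this sketch, not the Amgoud–Prade framework itself:

```python
# A toy possibilistic knowledge base about an opponent agent:
# each rule is (premise, conclusion, necessity degree in [0, 1]).
rules = [
    ("late_delivery", "unreliable", 0.8),
    ("unreliable", "distrust", 0.6),
]

def infer(facts, rules):
    """Forward-chain weighted modus ponens: the necessity of a derived
    proposition is min(necessity of its premise, necessity of the rule),
    keeping the highest degree found for each proposition."""
    degrees = dict(facts)  # proposition -> necessity degree
    changed = True
    while changed:
        changed = False
        for premise, conclusion, necessity in rules:
            if premise in degrees:
                derived = min(degrees[premise], necessity)
                if derived > degrees.get(conclusion, 0.0):
                    degrees[conclusion] = derived
                    changed = True
    return degrees

# Observing a late delivery with full certainty yields
# "unreliable" at 0.8 and "distrust" at 0.6.
beliefs = infer({"late_delivery": 1.0}, rules)
```

The min-combination is the standard possibilistic propagation rule; an argumentation layer would then weigh such derived beliefs for and against a trusting decision.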
A MEASURE OF THE VALUE OF INFORMATION FOR THE COMPETITIVE FIRM UNDER PRICE UNCERTAINTY
This paper addresses the problem of measuring the value of information to an agent in an environment where the agent is risk averse and choices are based on the utility of income and personal beliefs about the likelihood of uncertain outcomes.
Did the NSA and GCHQ Diminish Our Privacy? What the Control Account Should Say
A standard account of privacy says that it is essentially a kind of control over personal information. Many privacy scholars have argued against this claim by relying on so-called threatened loss cases. In these cases, personal information about an agent is easily available to another person, but not accessed. Critics contend that control accounts have the implausible implication that the privacy of the relevant agent is diminished in threatened loss cases. Recently, threatened loss cases have become important because Edward Snowden's revelation of how the NSA and GCHQ collected Internet and mobile phone data presents us with a gigantic, real-life threatened loss case. In this paper, I will defend the control account of privacy against the argument that is based on threatened loss cases. I will do so by developing a new version of the control account that implies that the agents' privacy is not diminished in threatened loss cases.
Merger Efficiency and Managerial Incentives
We consider a two-stage principal-agent model with limited liability in which a CEO is employed as agent to gather information about suitable merger targets and to manage the merged corporation in case of an acquisition. Our results show that the CEO systematically recommends targets with low synergies, even when targets with high synergies are available, to obtain high-powered incentives and, hence, a high personal income at the merger-management stage. We derive conditions under which shareholders prefer a self-commitment policy or a rent-reduction policy to deter the CEO from opportunistic recommendations.
Trust beyond reputation: A computational trust model based on stereotypes
Models of computational trust support users in taking decisions. They are commonly used to guide users' judgements in online auction sites, or to determine the quality of contributions in Web 2.0 sites. However, most existing systems require historical information about the past behavior of the specific agent being judged. In contrast, in real life, to anticipate and to predict a stranger's actions in the absence of knowledge of such behavioral history, we often use our "instinct", essentially stereotypes developed from our past interactions with other "similar" persons. In this paper, we propose StereoTrust, a computational trust model inspired by stereotypes as used in real life. A stereotype contains certain features of agents and an expected outcome of the transaction. When facing a stranger, an agent derives its trust by aggregating stereotypes matching the stranger's profile. Since stereotypes are formed locally, recommendations stem from the trustor's own personal experiences and perspective. Historical behavioral information, when available, can be used to refine the analysis. According to our experiments using the Epinions.com dataset, StereoTrust compares favorably with existing trust models that use different kinds of information and more complete historical information.
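The aggregation step described above (derive trust for a stranger by combining the stereotypes whose features match the stranger's profile) can be sketched as follows. The class layout, support-weighted average, and neutral prior of 0.5 are illustrative assumptions for this sketch, not the paper's exact formulation:

```python
from dataclasses import dataclass

@dataclass
class Stereotype:
    features: frozenset  # observable traits that define the group
    trust: float         # expected transaction outcome in [0, 1]
    support: int         # number of past interactions behind it

def stereotrust(stereotypes, stranger_features):
    """Aggregate the trust values of all stereotypes whose feature set
    is contained in the stranger's profile, weighted by support."""
    matches = [s for s in stereotypes if s.features <= stranger_features]
    if not matches:
        return 0.5  # no matching stereotype: fall back to a neutral prior
    total = sum(s.support for s in matches)
    return sum(s.trust * s.support for s in matches) / total

# Stereotypes built locally from the trustor's own past interactions:
groups = [
    Stereotype(frozenset({"verified"}), 0.9, 10),
    Stereotype(frozenset({"new_seller"}), 0.3, 5),
]
# A stranger matching both groups gets (0.9*10 + 0.3*5) / 15 = 0.7.
score = stereotrust(groups, frozenset({"verified", "new_seller"}))
```

Weighting by support means stereotypes backed by many past interactions dominate; when historical information about the specific stranger exists, it could refine or override this prior.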
Consumers' self-disclosure decisions and concerns: the effects of social exclusion and agent anthropomorphism
Consumer data and privacy is becoming an increasingly important topic in marketing, as the collection and use of consumers' personal information and instances of data breach are both on the rise. At the core of these recent shifts in the consumer data and privacy landscape is consumers' concern with sharing their personal information. Past research on consumer privacy has focused on when and why consumers' concerns are heightened and why people still provide their personal information despite the concerns. This dissertation extends the literature on consumer self-disclosure and privacy concerns and explores novel psychological and situational factors that influence consumers' decision to disclose and concern with sharing their personal information with brands and marketers. In Essay 1, I focused on the influence of individual and situational differences, namely the feeling of social exclusion, and examined how experiencing social exclusion can increase consumers' self-disclosure intentions toward brands. Specifically, I proposed that consumers will be more willing to share their information with a brand when they experience social exclusion, driven by their desire to forge social connections with the brand. Through five studies, I tested and confirmed these hypotheses and also demonstrated two boundary conditions. In Essay 2, I investigated how anthropomorphism of products and brands, a marketer-controlled variable, influences consumers' concerns with sharing their personal information when there are threats to privacy in the environment. Specifically, I proposed that consumers' concerns with information collection by agents (i.e., products or brands) would be influenced by the level of privacy threats in the environment and the anthropomorphic nature of the agent, and that the effects would be driven by the perception of control over the agent. I argued that, when threats to privacy are high (vs. low), individuals' concern with sharing their data will increase for a non-anthropomorphic agent, but that this effect will be attenuated for an anthropomorphic agent collecting the information. Furthermore, I expected that the difference in perceived control over the agent would account for these effects. I tested and partially confirmed these hypotheses through five studies.