
    Training a personal alert system for research information recommendation

    Information Systems, and in particular Current Research Information Systems (CRISs), are usually quite difficult to query when looking for specific information, due to the huge amounts of data they contain. To solve this problem, we propose to use a personal search agent that uses fuzzy and rough sets to inform the user about newly available information. Additionally, in order to automate the operation of our solution and to provide it with sufficient information, a document classification module is developed and tested. This module also generates fuzzy relations between research domains that are used by the agent during the mapping process.
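
    The abstract gives no implementation details, but the idea of matching a classified document to a user profile through fuzzy relations between research domains can be illustrated. Below is a minimal Python sketch under that reading; the domain names, membership degrees, alert threshold, and the sup-min composition are illustrative assumptions, not taken from the paper.

    ```python
    # Hypothetical sketch: matching a new document to a user's interest
    # profile via fuzzy memberships over research domains. All names,
    # degrees, and the 0.5 threshold are illustrative, not from the paper.

    # Fuzzy relation between research domains (degree of relatedness).
    domain_relation = {
        ("machine learning", "data mining"): 0.8,
        ("machine learning", "statistics"): 0.6,
        ("data mining", "databases"): 0.7,
    }

    def related(a: str, b: str) -> float:
        """Degree to which domains a and b are related (symmetric)."""
        if a == b:
            return 1.0
        return max(domain_relation.get((a, b), 0.0),
                   domain_relation.get((b, a), 0.0))

    def match_score(doc_domains: dict[str, float],
                    user_interests: dict[str, float]) -> float:
        """Sup-min composition of the document's domain memberships,
        the domain relation, and the user's interest memberships."""
        return max(
            (min(mu_d, related(d, i), mu_i)
             for d, mu_d in doc_domains.items()
             for i, mu_i in user_interests.items()),
            default=0.0,
        )

    doc = {"data mining": 0.9, "databases": 0.4}         # classifier output
    user = {"machine learning": 1.0, "statistics": 0.3}  # user profile
    if match_score(doc, user) >= 0.5:                    # alert threshold
        print("alert user: new relevant research output")
    ```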

    Fuzzy argumentation for trust

    In an open Multi-Agent System, the goals of agents acting on behalf of their owners often conflict, so a personal agent protecting the interests of a single user cannot always rely on other agents. Consequently, such a personal agent needs to be able to reason about trusting (information or services provided by) other agents. Existing algorithms that perform such reasoning mainly focus on the immediate utility of a trusting decision, but do not provide an explanation of their actions to the user. This may hinder the acceptance of agent-based technologies in sensitive applications where users need to rely on their personal agents. Against this background, we propose a new approach to trust based on argumentation that aims to expose the rationale behind such trusting decisions. Our solution separates opponent modeling from decision making: it uses possibilistic logic to model the behavior of opponents, and we propose an extension of the argumentation framework of Amgoud and Prade that uses the fuzzy rules within these models for well-supported decisions.
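
    The abstract does not spell out the extended framework, so the following Python sketch only illustrates the general flavor: possibilistic rules about an opponent, each carrying a certainty degree, support arguments for trusting or distrusting, and the strongest supporting rule both decides and explains. All rules, facts, and degrees are hypothetical.

    ```python
    # Hypothetical sketch of possibilistic rules feeding an explainable
    # trust decision, loosely in the spirit of the abstract (the actual
    # extension of the Amgoud-Prade framework is not specified there).

    from dataclasses import dataclass

    @dataclass
    class Rule:
        premises: frozenset[str]   # observed facts about the opponent
        conclusion: str            # "trust" or "distrust"
        certainty: float           # necessity degree in [0, 1]

    rules = [
        Rule(frozenset({"kept_last_commitment"}), "trust", 0.7),
        Rule(frozenset({"provided_bad_info"}), "distrust", 0.9),
        Rule(frozenset({"new_agent"}), "distrust", 0.3),
    ]

    def arguments(facts: set[str], decision: str) -> list[Rule]:
        """Rules whose premises all hold and that support the decision."""
        return [r for r in rules
                if r.conclusion == decision and r.premises <= facts]

    def decide(facts: set[str]):
        """Pick the decision backed by the strongest argument; the
        winning rule can be shown to the user as an explanation."""
        best = {d: max(arguments(facts, d),
                       key=lambda r: r.certainty, default=None)
                for d in ("trust", "distrust")}
        winner = max(best,
                     key=lambda d: best[d].certainty if best[d] else 0.0)
        return winner, best[winner]

    decision, reason = decide({"kept_last_commitment", "new_agent"})
    print(decision, "because", reason)
    ```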

    A Measure of the Value of Information for the Competitive Firm under Price Uncertainty

    This paper addresses the problem of measuring the value of information to an agent in an environment where the agent is risk averse and choices are based on the utility of income and personal beliefs about the likelihood of uncertain outcomes.
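
    The abstract does not reproduce the measure itself. A standard expected-utility way to define such a value, which the paper's measure presumably resembles, prices information as the sure income the agent would give up to receive the signal; every symbol below is placeholder notation, not the paper's.

    ```latex
    % Illustrative expected-utility definition of the value of information
    % (the paper's exact measure may differ). The agent chooses an action
    % a before the uncertain price p is realized, u is the utility of
    % income, and \pi(a, p) is profit. The value V of a signal s is the
    % sure payment that makes the agent indifferent between acting on the
    % signal and acting on prior beliefs alone:
    \[
      \mathbb{E}_{s}\!\left[\,\max_{a}\ \mathbb{E}\bigl[\,u\bigl(\pi(a,p)-V\bigr)\ \big|\ s\,\bigr]\right]
      \;=\;
      \max_{a}\ \mathbb{E}\bigl[\,u\bigl(\pi(a,p)\bigr)\bigr]
    \]
    ```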

    Did the NSA and GCHQ Diminish Our Privacy? What the Control Account Should Say

    A standard account of privacy says that it is essentially a kind of control over personal information. Many privacy scholars have argued against this claim by relying on so-called threatened loss cases. In these cases, personal information about an agent is easily available to another person, but not accessed. Critics contend that control accounts have the implausible implication that the privacy of the relevant agent is diminished in threatened loss cases. Recently, threatened loss cases have become important because Edward Snowden’s revelation of how the NSA and GCHQ collected Internet and mobile phone data presents us with a gigantic, real-life threatened loss case. In this paper, I will defend the control account of privacy against the argument that is based on threatened loss cases. I will do so by developing a new version of the control account that implies that the agents’ privacy is not diminished in threatened loss cases.

    Merger Efficiency and Managerial Incentives

    We consider a two-stage principal-agent model with limited liability in which a CEO is employed as agent to gather information about suitable merger targets and to manage the merged corporation in case of an acquisition. Our results show that the CEO systematically recommends targets with low synergies, even when targets with high synergies are available, in order to obtain high-powered incentives and, hence, a high personal income at the merger-management stage. We derive conditions under which shareholders prefer a self-commitment policy or a rent-reduction policy to deter the CEO from making opportunistic recommendations.
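
    The abstract omits the model's functional form; as rough orientation only, a generic limited-liability moral-hazard stage has the familiar shape below, where every symbol is a placeholder rather than the paper's notation.

    ```latex
    % Generic second-stage moral-hazard problem under limited liability
    % (placeholder notation; the paper's actual specification is not in
    % the abstract). The merged firm's profit x depends on the target's
    % synergy level s and the CEO's effort e; the wage w must be
    % non-negative (limited liability) and make effort incentive-compatible:
    \[
      \max_{w(\cdot)\ge 0}\; \mathbb{E}\bigl[x(s,e) - w(x)\bigr]
      \quad\text{s.t.}\quad
      e \in \arg\max_{e'}\; \mathbb{E}\bigl[w(x(s,e'))\bigr] - c(e')
    \]
    % Under limited liability, high-powered incentives leave the CEO a
    % rent, which is the channel through which recommending low-synergy
    % targets can raise his personal income.
    ```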

    Trust beyond reputation: A computational trust model based on stereotypes

    Models of computational trust support users in taking decisions. They are commonly used to guide users' judgements in online auction sites or to determine the quality of contributions in Web 2.0 sites. However, most existing systems require historical information about the past behavior of the specific agent being judged. In contrast, in real life, to anticipate and predict a stranger's actions in the absence of such a behavioral history, we often use our "instinct": essentially, stereotypes developed from our past interactions with other "similar" persons. In this paper, we propose StereoTrust, a computational trust model inspired by real-life stereotypes. A stereotype contains certain features of agents and an expected outcome of the transaction. When facing a stranger, an agent derives its trust by aggregating stereotypes matching the stranger's profile. Since stereotypes are formed locally, recommendations stem from the trustor's own personal experiences and perspective. Historical behavioral information, when available, can be used to refine the analysis. According to our experiments using the Epinions.com dataset, StereoTrust compares favorably with existing trust models that use different kinds of information, including more complete historical information.
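
    As a rough illustration of the aggregation idea described above, here is a minimal Python sketch; the feature names, the evidence-weighted average, and the 0.5 fallback are assumptions for illustration, not StereoTrust's actual specification.

    ```python
    # Hypothetical sketch of stereotype-based trust in the spirit of
    # StereoTrust (the paper's exact aggregation rule is not given in the
    # abstract). A stereotype pairs a feature set with the empirical
    # outcomes of past transactions with agents showing those features.

    from dataclasses import dataclass

    @dataclass
    class Stereotype:
        features: frozenset[str]  # e.g. {"new_account", "verified"}
        positive: int             # good outcomes observed in this group
        total: int                # all outcomes observed in this group

    def trust(stranger: set[str], stereotypes: list[Stereotype]) -> float:
        """Aggregate the outcome rates of all stereotypes matching the
        stranger's profile, weighted by the evidence each one carries."""
        matching = [s for s in stereotypes if s.features <= stranger]
        if not matching:
            return 0.5  # no stereotype applies: fall back to indifference
        pos = sum(s.positive for s in matching)
        tot = sum(s.total for s in matching)
        return pos / tot

    stereotypes = [
        Stereotype(frozenset({"new_account"}), positive=3, total=10),
        Stereotype(frozenset({"verified", "many_reviews"}), positive=45, total=50),
    ]
    # Only the first stereotype matches this profile, so trust is 3/10.
    print(trust({"new_account", "electronics_seller"}, stereotypes))  # 0.3
    ```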
