
    Exploring the Trust Gap: Dimensions and Predictors of Trust Among Labor and Management Representatives

    Existing literature on interpersonal trust in work relationships has largely focused on trust as an independent variable. This study examined trust as a dependent variable by investigating its dimensions and predictors. Four dimensions of trust were hypothesized: open communication, informal agreement, task reliance, and surveillance. A survey measure of willingness to trust was developed. Confirmatory factor analysis using data from 305 management representatives and 293 labor representatives showed the convergent and discriminant validity of the measure. Fishbein and Ajzen's theory of reasoned action served as the theoretical basis for a model of the predictors of trust. Regression analyses found that the past trustworthiness of the focal person and the attitude toward trusting the focal person were the most important predictors. Implications for research and practice are discussed.

    Trust and corruption: escalating social practices?

    Escalating social practices spread dynamically, as they take hold. They are self-fulfilling and contagious. This article examines two central social practices, trust and corruption, which may be characterized as alternative economic lubricants. Corruption can be a considerable instrument of flexibility, while trust may be an alternative to vigilance (or a collective regime of sanctions). Rational equilibrium explanations and psychological accounts of trust and corruption are rejected in favour of a model open to multiple feedbacks. Although there can be too much trust and too little corruption, and (unsurprisingly) too little trust and too much corruption, a state is unattainable in which these forces are in balance. Practices of trust alone can form stable equilibria, but it is claimed that such states are undesirable for economic and moral reasons. By contrast, practices of corruption are inherently unstable. Implications for strategies of control in organizational relations are drawn.

    Trust in scientists on climate change and vaccines

    On climate change and other topics, conservatives have taken positions at odds with a strong scientific consensus. Claims that this indicates a broad conservative distrust of science have been countered by assertions that while conservatives might oppose the scientific consensus on climate change or evolution, liberals oppose scientists in some other domains, such as vaccines. Evidence for disproportionately liberal bias against science on vaccines has been largely anecdotal, however. Here, we test this proposition of opposite biases using 2014 survey data from Oregon and New Hampshire. Across vaccine as well as climate change questions on each of these two surveys, we find that Democrats are most likely to say they trust scientists for information, and Tea Party supporters are least likely, contradicting the proposition of opposite bias. Moreover, partisan divisions tend to widen with education. Theoretical explanations that have been offered for liberal trust or conservative distrust of science in other specific domains such as climate change or environmental protection fit less well with these results on vaccines. Given the much different content of climate change and vaccine issues, the common political pattern appears more consistent with hypotheses of broader ideological divisions on acceptance of science.

    Local and Global Trust Based on the Concept of Promises

    We use the notion of a promise to define local trust between agents possessing autonomous decision-making. An agent is trustworthy if it is expected that it will keep a promise. This definition satisfies most commonplace meanings of trust. Reputation is then an estimation of this expectation value that is passed on from agent to agent. Our definition distinguishes types of trust, for different behaviours, and decouples the concept of agent reliability from the behaviour on which the judgement is based. We show, however, that trust is fundamentally heuristic, as it provides insufficient information for agents to make a rational judgement. A global trustworthiness, or community trust, can be defined by a proportional, self-consistent voting process, as a weighted eigenvector-centrality function of the promise-theoretical graph.
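
    The community-trust computation this abstract describes amounts to a weighted eigenvector centrality over a directed trust graph. A minimal sketch of that idea via power iteration (the trust weights and the iteration scheme are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Hypothetical directed trust graph: W[i, j] = how much agent i trusts
# agent j, e.g. an estimate of how reliably j keeps promises to i.
# The weights below are made up for illustration.
W = np.array([
    [0.0, 0.8, 0.2],
    [0.6, 0.0, 0.4],
    [0.9, 0.1, 0.0],
])

def community_trust(W, iters=100):
    """Global trust scores as the principal eigenvector of the trust
    matrix, computed by power iteration and normalised to sum to 1."""
    t = np.ones(W.shape[0]) / W.shape[0]
    for _ in range(iters):
        t = W.T @ t        # an agent's score is the trust-weighted sum
        t = t / t.sum()    # of the scores of the agents that trust it
    return t

print(community_trust(W))
```

    The self-consistency the abstract mentions shows up here as the fixed point of the iteration: an agent ranks highly when highly ranked agents trust it.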

    Privacy, security, and trust issues in smart environments

    Recent advances in networking, handheld computing and sensor technologies have driven forward research towards the realisation of Mark Weiser's dream of calm and ubiquitous computing (variously called pervasive computing, ambient computing, active spaces, the disappearing computer or context-aware computing). In turn, this has led to the emergence of smart environments as one significant facet of research in this domain. A smart environment, or space, is a region of the real world that is extensively equipped with sensors, actuators and computing components [1]. In effect, the smart space becomes part of a larger information system: all actions within the space potentially affect the underlying computer applications, which may themselves affect the space through the actuators. Such smart environments have tremendous potential within many application areas to improve the utility of a space. Consider the potential offered by a smart environment that prolongs the time an elderly or infirm person can live an independent life, or one that supports vicarious learning.

    Trust Strategies for the Semantic Web

    Everyone agrees on the importance of enabling trust on the Semantic Web to ensure more efficient agent interaction. Current research on trust seems to focus on developing computational models, semantic representations, inference techniques, etc. However, little attention has been given to the plausible trust strategies or tactics that an agent can follow when interacting with other agents on the Semantic Web. In this paper we identify the five most common trust strategies and discuss their envisaged costs and benefits. The aim is to provide some guidelines to help system developers appreciate the risks and gains involved with each trust strategy.

    Online Computation with Untrusted Advice

    The advice model of online computation captures a setting in which the algorithm is given some partial information concerning the request sequence. This paradigm makes it possible to establish tradeoffs between the amount of this additional information and the performance of the online algorithm. However, if the advice is corrupt or, worse, if it comes from a malicious source, the algorithm may perform poorly. In this work, we study online computation in a setting in which the advice is provided by an untrusted source. Our objective is to quantify the impact of untrusted advice so as to design and analyze online algorithms that are robust and perform well even when the advice is generated in a malicious, adversarial manner. To this end, we focus on well-studied online problems such as ski rental, online bidding, bin packing, and list update. For ski rental and online bidding, we show how to obtain algorithms that are Pareto-optimal with respect to the competitive ratios achieved; this improves upon the framework of Purohit et al. [NeurIPS 2018], in which Pareto-optimality is not necessarily guaranteed. For bin packing and list update, we give online algorithms with worst-case tradeoffs in their competitiveness, depending on whether the advice is trusted or not; this is motivated by the work of Lykouris and Vassilvitskii [ICML 2018] on the paging problem, in which the competitiveness depends on the reliability of the advice. Furthermore, we demonstrate how to prove lower bounds, within this model, on the tradeoff between the number of advice bits and the competitiveness of any online algorithm. Last, we study the effect of randomization: here we show that for ski rental there is a randomized algorithm that Pareto-dominates any deterministic algorithm with advice of any size. We also show that a single random bit is not always inferior to a single advice bit, as is the case in the standard model.
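
    The ski-rental tradeoff between trusting and distrusting advice can be illustrated with a small sketch in the style of the λ-parameterised algorithm of Purohit et al.; the function names, the λ = 0.5 choice, and the cost model are our illustrative assumptions, not this paper's exact construction:

```python
import math

def ski_rental_cost(days, buy_day, b):
    """Total cost of renting (1 per day) until buy_day, then buying for b."""
    if days < buy_day:
        return days              # season ended before we bought
    return (buy_day - 1) + b     # rented buy_day - 1 days, then bought

def advised_buy_day(predicted_days, b, lam=0.5):
    """Advice-aware buying threshold (illustrative). Smaller lam trusts the
    prediction more; lam = 1 recovers the classic break-even rule of
    buying on day b."""
    if predicted_days >= b:          # advice says the season is long: buy early
        return math.ceil(lam * b)
    return math.ceil(b / lam)        # advice says short: delay buying

# Example: skis cost b = 10; (possibly untrusted) advice predicts 30 ski days.
b = 10
day = advised_buy_day(30, b)            # ceil(0.5 * 10) = 5
print(ski_rental_cost(30, day, b))      # 4 days of rent + 10 = 14; OPT pays 10
```

    If the advice is accurate, the cost stays within a (1 + λ) factor of optimal; if it is adversarial, the delayed/advanced thresholds still cap the loss at a (1 + 1/λ) factor, which is the consistency–robustness tradeoff the abstract refers to.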

    Increased security through open source

    In this paper we discuss the impact of open source on both the security and transparency of a software system. We focus on the more technical aspects of this issue, combining and extending arguments developed over the years. We stress that our discussion of the problem only applies to software for general-purpose computing systems. For embedded systems, where the software usually cannot easily be patched or upgraded, different considerations may apply.