
    Exploring Trust in Online Ride-sharing Platform in China: A Perspective of Time and Location

    Trust is a key issue that must be considered deliberately in online ride-sharing platforms to reduce risk and secure transactions. In this paper, trust-in-platform is explored from the perspectives of time and location to fill the research gaps. A ride-sharing platform in China was investigated. Results show that trust-in-platform in economically developing districts is slightly higher than in economically developed districts. Trust-in-platform also varies over time: levels are markedly lower between 19:00 and 23:00. Moreover, machine learning is employed to predict trust-in-platform from time and location, yielding a recall of 78.3%, a precision of 57.3%, and an F1 score of 66.2%. These results show that trust-in-platform is clearly correlated with time and location, further consolidating the findings. This study contributes to the existing knowledge on trust in ride-sharing platforms and has practical implications for platform operators.
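    As a rough cross-check, the reported F1 is the harmonic mean of the reported precision and recall. The sketch below illustrates both that arithmetic and a minimal time/location-to-trust classifier on synthetic data; the feature encoding, the labels, and the choice of classifier are our assumptions, since the abstract does not specify them.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Sanity check on the reported metrics: F1 = 2PR / (P + R).
p, r = 0.573, 0.783
print(round(2 * p * r / (p + r), 3))  # 0.662, matching the reported 66.2%

# Toy end-to-end sketch with synthetic data. Hypothetical features:
# hour of day (0-23) and a district-development flag; the paper does
# not specify its exact encoding or classifier.
rng = np.random.default_rng(0)
X = np.column_stack([rng.integers(0, 24, 1000), rng.integers(0, 2, 1000)])
y = ~((X[:, 0] >= 19) & (X[:, 0] <= 23))  # toy label: trust dips 19:00-23:00

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pred = RandomForestClassifier(random_state=0).fit(X_tr, y_tr).predict(X_te)
print(recall_score(y_te, pred), precision_score(y_te, pred), f1_score(y_te, pred))
```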

    Trust beyond reputation: A computational trust model based on stereotypes

    Models of computational trust support users in taking decisions. They are commonly used to guide users' judgements in online auction sites, or to determine the quality of contributions in Web 2.0 sites. However, most existing systems require historical information about the past behavior of the specific agent being judged. In contrast, in real life, to anticipate and predict a stranger's actions in the absence of knowledge of such a behavioral history, we often use our "instinct": essentially stereotypes developed from our past interactions with other "similar" persons. In this paper, we propose StereoTrust, a computational trust model inspired by real-life stereotypes. A stereotype contains certain features of agents and an expected outcome of the transaction. When facing a stranger, an agent derives its trust by aggregating stereotypes matching the stranger's profile. Since stereotypes are formed locally, recommendations stem from the trustor's own personal experiences and perspective. Historical behavioral information, when available, can be used to refine the analysis. In our experiments on an Epinions.com dataset, StereoTrust compares favorably with existing trust models that use different kinds of information and more complete historical information.
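    The core mechanism, deriving trust for a stranger by aggregating locally formed stereotypes that match the stranger's profile, might be sketched as follows. The class names, the feature-overlap weighting, and the Beta-style outcome estimate are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Stereotype:
    features: frozenset  # e.g. {"seller", "electronics"}
    successes: int = 0
    failures: int = 0

    def expected_outcome(self) -> float:
        # Beta-style estimate of transaction success for this group.
        return (self.successes + 1) / (self.successes + self.failures + 2)

class StereoTrustAgent:
    def __init__(self):
        self.stereotypes: dict[frozenset, Stereotype] = {}

    def record(self, profile: set, success: bool) -> None:
        # Stereotypes are formed locally, from the trustor's own experiences.
        key = frozenset(profile)
        st = self.stereotypes.setdefault(key, Stereotype(key))
        if success:
            st.successes += 1
        else:
            st.failures += 1

    def trust(self, stranger_profile: set) -> float:
        # Aggregate stereotypes overlapping the stranger's profile,
        # weighted by how many features each shares with it.
        matches = [(len(st.features & stranger_profile), st)
                   for st in self.stereotypes.values()
                   if st.features & stranger_profile]
        if not matches:
            return 0.5  # no matching stereotype: fall back to a neutral prior
        total = sum(w for w, _ in matches)
        return sum(w * st.expected_outcome() for w, st in matches) / total

agent = StereoTrustAgent()
agent.record({"seller", "electronics"}, success=True)
agent.record({"seller", "books"}, success=False)
print(agent.trust({"seller", "electronics"}))  # ~0.56: leans positive
```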

    Modeling Dispositional and Initial Learned Trust in Automated Vehicles with Predictability and Explainability

    Technological advances in the automotive industry are bringing automated driving closer to road use. However, one of the most important factors affecting public acceptance of automated vehicles (AVs) is the public's trust in AVs. Many factors can influence people's trust, including perception of risks and benefits, feelings, and knowledge of AVs. This study aims to use these factors to predict people's dispositional and initial learned trust in AVs, using a survey study conducted with 1175 participants. For each participant, 23 features were extracted from the survey questions to capture his or her knowledge, perception, experience, behavioral assessment, and feelings about AVs. These features were then used as input to train an eXtreme Gradient Boosting (XGBoost) model to predict trust in AVs. With the help of SHapley Additive exPlanations (SHAP), we were able to interpret the trust predictions of XGBoost and further improve the explainability of the model. Compared to traditional regression models and black-box machine learning models, our findings show that this approach simultaneously provided a high level of predictability and explainability of trust in AVs.
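    The described pipeline (23 survey-derived features, an XGBoost model, SHAP explanations) maps onto the standard xgboost and shap libraries. A minimal sketch on synthetic stand-in data follows, since the survey data and the exact model configuration are not given here:

```python
import numpy as np
import shap
import xgboost

# Synthetic stand-in for the survey: 1175 respondents x 23 features
# (knowledge, perceived risk/benefit, feelings, etc. in the paper).
rng = np.random.default_rng(0)
X = rng.normal(size=(1175, 23))
y = 0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=1175)  # toy trust score

# Gradient-boosted trees supply the predictive accuracy...
model = xgboost.XGBRegressor(n_estimators=200, max_depth=3)
model.fit(X, y)

# ...and SHAP attributes each prediction back to the 23 input features,
# which is what supplies the explainability on top of that accuracy.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(shap_values.shape)           # (1175, 23): one attribution per feature
shap.summary_plot(shap_values, X)  # global view of feature influence
```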

    A Generic Trust Framework For Large-Scale Open Systems Using Machine Learning

    In many large-scale distributed systems and on the Web, agents need to interact with other, unknown agents to carry out tasks or transactions. The ability to reason about and assess the potential risks in carrying out such transactions is essential for providing a safe and reliable interaction environment. A traditional approach to reasoning about the risk of a transaction is to determine whether the involved agent is trustworthy on the basis of its behavior history. As a departure from such traditional trust models, we propose a generic trust framework based on machine learning, in which an agent uses its own previous transactions (with other agents) to build a personal knowledge base. This knowledge base is used to assess the trustworthiness of a transaction on the basis of its associated features, particularly those features that help discern successful transactions from unsuccessful ones. Appropriate machine learning algorithms are applied to these features to extract the relationships between the potential transaction and previous ones. Experiments on real data sets show that our approach is more accurate than other trust mechanisms, especially when information about the past behavior of the specific agent is rare, incomplete, or inaccurate.
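    The framework's central move, learning from the trustor's own transaction history which transaction features discriminate successful outcomes from unsuccessful ones, could be sketched with any standard classifier. The features and the logistic-regression choice below are hypothetical stand-ins:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# The agent's personal knowledge base: one row per past transaction it
# took part in (with any counterparty), labelled by outcome.
# Hypothetical features: transaction value, counterparty account age
# (days), number of mutual acquaintances.
past_transactions = np.array([
    [10.0, 400, 3],
    [250.0, 2, 0],
    [40.0, 900, 5],
    [500.0, 10, 1],
])
outcomes = np.array([1, 0, 1, 0])  # 1 = successful, 0 = unsuccessful

model = LogisticRegression().fit(past_transactions, outcomes)

# Trustworthiness of a *proposed* transaction is assessed from its own
# features; no behaviour history of the specific counterparty is needed.
proposed = np.array([[120.0, 30, 0]])
print(model.predict_proba(proposed)[0, 1])  # estimated success probability
```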

    Trust assessment in the context of unrepresentative information

    Trust and reputation algorithms are social methods, complementary to security protocols, that guide agents in multi-agent systems (MAS) in identifying trustworthy partners to communicate with. Agents need to interact to complete tasks, which requires delegating to an agent who has the time, resources, or information to achieve them. Existing trust and reputation assessment methods can be accurate when they learn from representative information; however, representative information rarely exists for all agents at all times. Improving trust mechanisms can benefit many open and distributed multi-agent applications, for example, distributing subtasks to trustworthy agents in pervasive computing, or choosing whom to share safe and high-quality files with in a peer-to-peer network.

    Trust and reputation algorithms use the outcomes of past interaction experiences with agents to assess their behaviour. Stereotype models supplement trust and reputation methods when direct interaction experiences are lacking, by inferring that the target will behave the same as agents who are observably similar. These mechanisms can be effective in MAS where behaviours and agents do not change, or change in a simplistic way, for example, if all agents changed their behaviour at the same rate. In real-world networks, agents experience fluctuations in their location, resources, knowledge, availability, time, and priorities. Existing work does not account for the resulting dynamic populations and dynamic agent behaviours. Additionally, trust, reputation, and stereotype models encourage repeat interactions with the same subset of agents, which increases the uncertainty about the behaviour of the rest of the agent population. In the long term, holding a biased view of the population hinders the discovery of new and better interaction partners. The diversity of agents and environments across MAS means that rigid approaches to maintaining and using data keep outdated information in some situations and too little data in others. A logical improvement is for agents to manage information flexibly and adapt to their situation.

    In this thesis we present the following contributions. We propose a method to improve partner selection by making agents aware of a lack of diversity in their own knowledge and of how to then make alternative behavioural assessments. We present methods for detecting dynamic behaviour in groups of agents, and give agents the statistical tools to decide which data are relevant; a minimal sketch of such a relevance test appears below. We introduce a data-free stereotype method to be used when there are no representative data for a data-driven behaviour assessment. Finally, we consider how agents can summarise agent behaviours to learn and exploit in-depth behavioural patterns.

    The work presented in this thesis is evaluated in a synthetic environment designed to mimic characteristics of real-world networks and comparable to evaluation environments from prominent trust and stereotype literature. The results show that our work improves agents' average reward from interactions by selecting better partners. We show that the efficacy of our work is most noticeable in environments where agents have sparse data, because it improves agents' trust assessments under uncertainty.
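    As referenced above, one contribution gives agents statistical tools to decide which past data remain relevant under changing behaviour. A minimal sketch, under our own assumptions (a fixed half/half split and a Mann-Whitney test), of discarding pre-change interaction outcomes:

```python
from scipy import stats

def relevant_window(outcomes, alpha=0.05):
    """Keep the whole outcome history if it looks statistically stable;
    otherwise keep only the recent half, discarding data from before an
    apparent behaviour change. Illustrative only; not the thesis's
    actual detection method."""
    half = len(outcomes) // 2
    older, recent = outcomes[:half], outcomes[half:]
    # Two-sample test: has the distribution of outcomes shifted?
    _, p = stats.mannwhitneyu(older, recent)
    return outcomes if p >= alpha else recent

# An agent that was reliable and then degraded: older data are dropped.
history = [1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0]
print(relevant_window(history))  # [0, 0, 1, 0, 0, 0]
```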