3 research outputs found

    Enhancing online working-together relations

    From the very early days of mankind, people have worked together to achieve greater outcomes and to meet both short- and long-term needs and goals. We work together to solve problems, to achieve better results, and to achieve them more quickly. We also learn from others and share new ideas, and organisations innovate by collaborating with other organisations. In recent decades, advances in mobile and other digital technologies have enabled the creation of many online applications that support groups working together. The number of people and organisations adopting working-together applications is rapidly increasing. Not all of these applications have succeeded; after a while, users tend to stop using applications that do not help them to develop collaborative practices with their team members. The term “collaboration” is often used to refer to any working-together activity, but there are many types of working-together relations.

    To better understand the essential characteristics of a successful online application that effectively supports people to work together, I first undertook an inductive analysis of the related literature. By combining the findings from the literature, I was able to clearly articulate the characteristics associated with four identified categories of working-together relations: networking, coordination, cooperation and collaboration. I also identified the essential activities performed in each working-together category, and the factors that enable successful working-together relations: trust, risk and rewards. These insights will assist in the design of successful online applications to support different categories of working-together relations.

    The first contribution of this research is a new framework that can be used to clarify how effectively an existing application assists working-together relations. This framework is based on the analysis of the characteristics and processes identified in the working-together categories. The second contribution is to establish the mechanisms that enhance working-together relations in each category. These mechanisms can be used to enhance existing or new online working-together tools. My research shows that user behaviour (that is, how the application is used) depends on how the mechanisms of trust, risk and reward are implemented within the application. Understanding the mechanisms in place within an application can lead to an understanding of how working-together relations evolve and how organisations can move from one working-together category to another.
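    As an illustration only, the sketch below shows one way the framework's vocabulary could be expressed in code: the four working-together categories plus flags for whether an application implements the trust, risk and reward mechanisms. The class names, fields and threshold rule are hypothetical assumptions for this sketch, not definitions taken from the thesis.

```python
# A minimal, hypothetical sketch of the framework's vocabulary: four
# working-together categories and the trust/risk/reward mechanisms an
# application may implement. Names and the threshold rule are illustrative.
from dataclasses import dataclass
from enum import Enum, auto


class WorkingTogetherCategory(Enum):
    NETWORKING = auto()
    COORDINATION = auto()
    COOPERATION = auto()
    COLLABORATION = auto()


@dataclass
class ApplicationProfile:
    """How an online application implements the enabling factors (assumed fields)."""
    name: str
    category: WorkingTogetherCategory
    builds_trust: bool      # e.g. identity, reputation, shared history
    manages_risk: bool      # e.g. access control, versioning, audit trails
    shares_rewards: bool    # e.g. shared goals, credit, visible outcomes

    def supports_category(self) -> bool:
        # Illustrative rule of thumb: the "deeper" the relation,
        # the more of the three mechanisms the application should cover.
        required = {
            WorkingTogetherCategory.NETWORKING: 1,
            WorkingTogetherCategory.COORDINATION: 2,
            WorkingTogetherCategory.COOPERATION: 2,
            WorkingTogetherCategory.COLLABORATION: 3,
        }[self.category]
        implemented = sum([self.builds_trust, self.manages_risk, self.shares_rewards])
        return implemented >= required


profile = ApplicationProfile("shared-whiteboard", WorkingTogetherCategory.COLLABORATION,
                             builds_trust=True, manages_risk=True, shares_rewards=False)
print(profile.supports_category())  # False: the reward mechanism is missing
```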

    Trust management for mobile computing platforms

    Providing a trustworthy mobile computing platform is crucial for mobile communications, services and applications. In this dissertation, we study methodologies and mechanisms that can be used to provide a trustworthy mobile computing platform. We also present an autonomic trust management solution for a component software middleware platform targeting an embedded device, such as a mobile phone.

    In the first part of the dissertation, we first review the literature on trust modeling and trust management. We propose research methodologies on the basis of a conceptual architecture of a trusted mobile environment. Further, we present a methodology to bridge disjoint trusted domains in mobile computing and communications into a trustworthy system.

    The second part of the dissertation contains a mechanism to sustain trust among computing platforms. The mechanism builds up a trust relationship based on the Root Trust (RT) module at a trustee platform and ensures trust sustainability according to pre-defined conditions. These conditions are approved at the time of trust establishment and enforced through the use of the pre-attested RT module until the intended purpose is fulfilled. Applying this mechanism, we introduce a Trusted Collaboration Infrastructure (TCI) for peer-to-peer devices in order to establish trusted collaboration among distributed peers. In addition, this mechanism contributes to a mobile Virtual Private Network (VPN) for trusted mobile enterprise networking.

    The third part of the dissertation presents an autonomic trust management solution that can manage trust adaptively in a middleware component software platform. We develop a formal trust model to specify, evaluate, set up and ensure the trust relationships that exist among system entities. We further present a trust management architecture that supports the implementation of the above model and adopts a number of algorithms for autonomic trust management at system runtime. In particular, special control modes can be applied to the platform to ensure trustworthiness. We develop a methodology for trust control mode prediction and selection on the basis of an adaptive trust control model in order to support autonomic trust management.
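    To make the idea of runtime, autonomic trust management more concrete, the sketch below shows a hypothetical control loop: trust in a component is re-evaluated from observed quality metrics and a control mode is selected when trust drops. The metric names, weights, thresholds and mode names are assumptions for illustration, not the dissertation's actual model or algorithms.

```python
# A hypothetical sketch of an autonomic trust management loop: trust is
# recomputed at runtime from observed quality attributes, and a control
# mode is chosen from the current trust level. All values are illustrative.
from dataclasses import dataclass


@dataclass
class Observation:
    availability: float   # 0..1, fraction of successful invocations
    integrity: float      # 0..1, share of responses passing integrity checks


def evaluate_trust(obs: Observation, weights=(0.5, 0.5)) -> float:
    """Weighted aggregation of quality attributes into a trust value in [0, 1]."""
    return weights[0] * obs.availability + weights[1] * obs.integrity


def select_control_mode(trust: float) -> str:
    """Pick a (hypothetical) control mode based on the current trust level."""
    if trust >= 0.8:
        return "monitor-only"
    if trust >= 0.5:
        return "sandboxed-execution"
    return "isolate-component"


obs = Observation(availability=0.9, integrity=0.4)
trust = evaluate_trust(obs)
print(trust, select_control_mode(trust))  # 0.65 sandboxed-execution
```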

    Trust Evaluation in the IoT Environment

    Along with its many benefits, the heterogeneity of the IoT brings a new challenge: establishing a trustworthy environment among objects in the absence of proper enforcement mechanisms. Moreover, this challenge is often addressed only in terms of the security and privacy matters involved. Such common network security measures, however, are not adequate to preserve the integrity of the information and services exchanged over the internet. IoT systems therefore remain vulnerable to threats ranging from data-management risks at the cyber-physical layers to potential discrimination at the social layer. Trust can thus be considered a key property to be enforced among objects in order to guarantee trustworthy services. Typically, trust revolves around the assurance and confidence that people, data, entities, information, or processes will function or behave in expected ways. Enforcing trust in an artificial society like the IoT is far more difficult, however, because things do not have the inherent judgement that humans use to assess risks and other influencing factors when evaluating trust. Hence, it is important to quantify the perception of trust so that it can be understood by artificial agents. In computer science, trust is considered a computational value attached to a relationship between a trustor and a trustee, described in a specific context, measured by trust metrics, and evaluated by a mechanism.

    Several trust evaluation mechanisms can be found in the literature. Most of this work, however, gravitates towards security and privacy issues rather than the universal meaning of trust and its dynamic nature, and it lacks a proper trust evaluation model and management platform that addresses all aspects of trust establishment. It is therefore almost impossible to bring these solutions together into a common platform that resolves end-to-end trust issues in a digital environment. This thesis attempts to fill these gaps through the following research work.

    First, this work proposes concrete definitions that formally identify trust as a computational concept and describe its characteristics. Next, a well-defined trust evaluation model is proposed to identify, evaluate and create trust relationships among objects for calculating trust. A trust management platform is then presented that identifies the major tasks of the trust enforcement process, including trust data collection, trust data management, trust information analysis, dissemination of trust information, and trust information lifecycle management. The thesis then proposes several approaches to assess trust attributes, and thereby the trust metrics of the above model, for trust evaluation. Further, to minimize dependence on human interaction in evaluating trust, an adaptive trust evaluation model based on machine learning techniques is presented.

    From a standardization point of view, the scope of the current standards on network security and cybersecurity needs to be expanded to take trust issues into consideration. This thesis has therefore provided several inputs towards standardization on trust, including a computational definition of trust, a trust evaluation model targeting both object and data trust, and a platform to manage the trust evaluation process.
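    As a purely illustrative sketch of trust as a computational value, the code below models a trustor–trustee relationship scoped to a context, measured by a few trust metrics and updated as new evidence arrives. The attribute names, weights and the exponential-smoothing update are assumptions made for this example, not the evaluation model proposed in the thesis.

```python
# A minimal, hypothetical illustration of computational trust: a relationship
# between a trustor and a trustee in a given context, measured by weighted
# trust metrics and blended with the previous value as new evidence arrives.
from dataclasses import dataclass, field


@dataclass
class TrustRelationship:
    trustor: str
    trustee: str
    context: str                      # e.g. "temperature-sensing"
    value: float = 0.5                # prior trust in [0, 1]
    weights: dict = field(default_factory=lambda: {
        "data_accuracy": 0.4, "availability": 0.3, "cooperativeness": 0.3})

    def update(self, metrics: dict, learning_rate: float = 0.2) -> float:
        """Blend the previous trust value with newly observed metric scores."""
        evidence = sum(self.weights[k] * metrics[k] for k in self.weights)
        self.value = (1 - learning_rate) * self.value + learning_rate * evidence
        return self.value


rel = TrustRelationship("gateway-01", "sensor-17", context="temperature-sensing")
rel.update({"data_accuracy": 0.9, "availability": 0.8, "cooperativeness": 1.0})
print(round(rel.value, 3))  # 0.58
```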