
    Enhancing Trust Management in Cloud Environment

    Trust management has been identified as a vital component for establishing and maintaining successful relational exchanges between e-commerce trading partners in a cloud environment. In this highly competitive and distributed service environment, assurances alone are insufficient for consumers to identify dependable and trustworthy cloud providers. Because of these limitations, potential consumers cannot be sure whether they can trust cloud providers to deliver dependable services. In this paper, we propose a multi-faceted trust management system architecture for cloud computing marketplaces to support customers in identifying trustworthy cloud providers. The paper presents the important threats to a trust system and proposes a method for tackling them, describes the desired features of a trust management system, and introduces security components that determine the trustworthiness of e-commerce participants and help online customers decide whether or not to proceed with a transaction. Based on this framework, we propose an approach for filtering out malicious feedback and a trust metric to evaluate the trustworthiness of a service provider. Results of various simulation experiments show that the proposed multi-attribute trust management system can be highly effective in identifying risky transactions in electronic marketplaces.
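
    The abstract does not spell out the feedback filter or the trust metric themselves; the following is a minimal illustrative sketch, assuming numeric ratings in [0, 1] and a simple median-deviation filter (the function names and threshold are assumptions, not taken from the paper).

```python
from statistics import median

def filter_feedback(ratings, max_dev=0.3):
    """Drop ratings far from the median -- a crude stand-in for a
    malicious-feedback filter (the threshold is an assumed value)."""
    m = median(ratings)
    return [r for r in ratings if abs(r - m) <= max_dev]

def trust_score(ratings):
    """Average the surviving ratings into a trust value in [0, 1]."""
    kept = filter_feedback(ratings)
    return sum(kept) / len(kept) if kept else 0.0

# Example: honest ratings around 0.8 plus a burst of malicious zeros.
print(trust_score([0.8, 0.85, 0.75, 0.9, 0.0, 0.0, 0.05]))  # 0.825
```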

    Techniques for the Fast Simulation of Models of Highly Dependable Systems

    With the ever-increasing complexity and requirements of highly dependable systems, their evaluation during design and operation is becoming more crucial. Realistic models of such systems are often not amenable to analysis using conventional analytic or numerical methods. Therefore, analysts and designers turn to simulation to evaluate these models. However, accurate estimation of dependability measures of these models requires that the simulation frequently observes system failures, which are rare events in highly dependable systems. This renders ordinary simulation impractical for evaluating such systems. To overcome this problem, simulation techniques based on importance sampling have been developed, and they are very effective in certain settings. When importance sampling works well, simulation run lengths can be reduced by several orders of magnitude when estimating transient as well as steady-state dependability measures. This paper reviews some of the importance-sampling techniques that have been developed in recent years to estimate dependability measures efficiently in Markov and non-Markov models of highly dependable systems.
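
    As a concrete illustration of the rare-event idea (not the specific estimators surveyed in the paper), the sketch below estimates the probability that an exponentially distributed time to failure falls below a mission time T by sampling from an accelerated failure rate and re-weighting each sample with the likelihood ratio; the rates are made-up values.

```python
import math
import random

def p_fail_is(lam=1e-5, T=1.0, lam_is=2.0, n=100_000, seed=1):
    """Estimate P(X <= T) for X ~ Exp(lam), a rare failure, by importance
    sampling: draw X from the heavier Exp(lam_is) and weight each hit with
    the likelihood ratio f_lam(x) / f_lam_is(x)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(lam_is)       # biased draw: failures become common
        if x <= T:
            total += (lam / lam_is) * math.exp(-(lam - lam_is) * x)  # re-weight
    return total / n

# Close to the true value 1 - exp(-lam*T) ~ 1e-5, using far fewer samples
# than plain Monte Carlo would need to observe any failure at all.
print(p_fail_is())
```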

    Adaptive service discovery on service-oriented and spontaneous sensor systems

    Keywords: Service-oriented architecture, Spontaneous networks, Self-organisation, Self-configuration, Sensor systems, Social patterns
    Natural and man-made disasters can significantly impact both people and environments. An enhanced effect can be achieved through the dynamic networking of people, systems and procedures and their seamless integration to fulfil mission objectives with service-oriented sensor systems. However, the benefits of service integration will not be realised unless we have a dependable method to discover all required services in dynamic environments. In this paper, we propose an Adaptive and Efficient Peer-to-peer Search (AEPS) approach for dependable service integration on service-oriented architecture based on a number of social behaviour patterns. In the AEPS network, the networked nodes can autonomously support and co-operate with each other in a peer-to-peer (P2P) manner to quickly discover and self-configure any services available in the disaster area and deliver a real-time capability by self-organising themselves into spontaneous groups, providing higher flexibility and adaptability for disaster monitoring and relief.
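
    The social behaviour patterns are specific to AEPS and are not reproduced here; as a generic illustration of peer-to-peer service discovery, the sketch below performs a hop-limited search for a service over an in-memory peer graph (the topology, names and hop limit are hypothetical).

```python
def discover(peers, start, service, ttl=3):
    """Hop-limited, breadth-first search for a service over a peer graph.
    `peers` maps node -> (set of neighbours, set of hosted services);
    returns the nodes hosting `service` within `ttl` hops of `start`."""
    frontier, seen = {start}, {start}
    found = {n for n in frontier if service in peers[n][1]}
    for _ in range(ttl):
        frontier = {nb for n in frontier for nb in peers[n][0]} - seen
        seen |= frontier
        found |= {n for n in frontier if service in peers[n][1]}
    return found

# Tiny example topology: A - B - C, where C hosts a temperature service.
peers = {
    "A": ({"B"}, set()),
    "B": ({"A", "C"}, set()),
    "C": ({"B"}, {"temperature"}),
}
print(discover(peers, "A", "temperature"))  # {'C'}
```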

    Replica determinism and flexible scheduling in hard real-time dependable systems

    Fault-tolerant real-time systems are typically based on active replication, where replicated entities are required to deliver their outputs in an identical order within a given time interval. Distributed scheduling of replicated tasks, however, violates this requirement if on-line scheduling, preemptive scheduling, or scheduling of dissimilar replicated task sets is employed. This problem of inconsistent task outputs has previously been solved by coordinating the decisions of the local schedulers such that replicated tasks are executed in an identical order. Global coordination results either in an extremely high communication effort to agree on each scheduling decision or in an overly restrictive execution model where on-line scheduling, arbitrary preemptions, and non-identically replicated task sets are not allowed. To overcome these restrictions, a new method, called timed messages, is introduced. Timed messages guarantee deterministic operation by presenting consistent message versions to the replicated tasks. This approach is based on simulated common knowledge and a sparse time base. Timed messages are very effective since they neither require communication between the local schedulers nor restrict the use of on-line flexible scheduling, preemptions, and non-identically replicated task sets.
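
    A minimal sketch of the timed-message idea under the stated assumption of a sparse global time base: a message version only becomes visible after its release time, so every replica that reads at the same tick observes the same version (class and method names are illustrative, not taken from the paper).

```python
class TimedMessageStore:
    """Each write carries a release tick on the sparse time base; a reader at
    tick t sees the newest version released at or before t, so replicas
    reading at the same tick always see identical values."""

    def __init__(self):
        self._versions = []                      # list of (release_tick, value)

    def publish(self, value, release_tick):
        self._versions.append((release_tick, value))
        self._versions.sort(key=lambda v: v[0])  # keep ordered by release time

    def read(self, tick):
        visible = [value for t, value in self._versions if t <= tick]
        return visible[-1] if visible else None

store = TimedMessageStore()
store.publish("v1", release_tick=10)
store.publish("v2", release_tick=20)
# Two replicas reading at the same tick get the same version, regardless of
# when the writes physically arrived at each node:
print(store.read(15), store.read(15))  # v1 v1
print(store.read(25), store.read(25))  # v2 v2
```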

    Are You Still There? - A Lightweight Algorithm to Monitor Node Presence in Self-Configuring Networks

    This paper is concerned with the analysis and redesign of a distributed algorithm to monitor the availability of nodes in self-configuring networks. The simple scheme of regularly probing a node ("are you still there?") may easily lead to over- or under-loading. The essence of the algorithm is therefore to automatically adapt the probing frequency. We show that a self-adaptive scheme to control the probe load, originally proposed as an extension to the UPnP (Universal Plug and Play) standard, leads to an unfair treatment of nodes: some nodes probe fast while others almost starve. An alternative distributed algorithm is proposed that overcomes this problem and tolerates highly dynamic network topology changes. The algorithm is very simple and can be implemented on large networks of small computing devices such as mobile phones, PDAs, and so on.
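
    A hedged interpretation of the probe-load adaptation (the constants, names and policy below are assumptions, not the UPnP extension analysed in the paper): each node widens its probe interval when it observes more probe traffic than a target rate and narrows it when the network is quiet.

```python
class AdaptiveProber:
    """Adjust the local probe interval from the probe traffic actually seen:
    back off multiplicatively when the network is over-probed, speed up in
    small additive steps when it is under-probed."""

    def __init__(self, interval=1.0, target_rate=10.0,
                 min_interval=0.1, max_interval=60.0):
        self.interval = interval          # seconds between this node's probes
        self.target_rate = target_rate    # desired probes/second seen locally
        self.min_interval = min_interval
        self.max_interval = max_interval

    def update(self, observed_rate):
        """Call once per measurement window with the observed probe rate."""
        if observed_rate > self.target_rate:
            self.interval = min(self.interval * 2.0, self.max_interval)
        else:
            self.interval = max(self.interval - 0.05, self.min_interval)
        return self.interval

p = AdaptiveProber()
for rate in (25.0, 18.0, 9.0, 4.0):       # observed probe load drops over time
    print(round(p.update(rate), 2))       # 2.0, 4.0, 3.95, 3.9
```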

    On-Line Dependability Enhancement of Multiprocessor SoCs by Resource Management

    This paper describes a new approach towards the dependable design of homogeneous multi-processor SoCs, illustrated in an example satellite-navigation application. First, the NoC dependability is functionally verified via embedded software. Then the Xentium processor tiles are periodically verified via on-line self-testing techniques, using a new IIP Dependability Manager. Based on the Dependability Manager results, faulty tiles are electronically excluded and replaced by fault-free spare tiles via on-line resource management. This integrated approach enables fast electronic fault detection/diagnosis and repair, and hence high system availability. The dependability application runs in parallel with the actual application, resulting in a very dependable system. All parts have been verified by simulation.
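
    The IIP Dependability Manager itself is hardware/software IP and is not modelled here; the sketch below only illustrates the resource-management step described in the abstract, i.e. excluding tiles that fail their periodic self-test and remapping their tasks onto fault-free spare tiles (all names and the mapping structure are hypothetical).

```python
def remap_tasks(mapping, test_results, spares):
    """mapping: task -> tile; test_results: tile -> bool (True = test passed);
    spares: fault-free spare tiles. Returns an updated mapping in which every
    task on a faulty tile has been moved to a spare, mimicking on-line repair."""
    spares = list(spares)
    new_mapping = {}
    for task, tile in mapping.items():
        if test_results.get(tile, False):
            new_mapping[task] = tile            # tile passed its periodic self-test
        else:
            if not spares:
                raise RuntimeError(f"no spare tile left for task {task!r}")
            new_mapping[task] = spares.pop(0)   # exclude faulty tile, use a spare
    return new_mapping

mapping = {"correlator": "tile0", "tracker": "tile1"}
results = {"tile0": True, "tile1": False}       # tile1 failed the self-test
print(remap_tasks(mapping, results, spares=["tile2"]))
# {'correlator': 'tile0', 'tracker': 'tile2'}
```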

    An automated wrapper-based approach to the design of dependable software

    The design of dependable software systems invariably comprises two main activities: (i) the design of dependability mechanisms, and (ii) the location of dependability mechanisms. It has been shown that these activities are intrinsically difficult. In this paper we propose an automated wrapper-based methodology to circumvent the problems associated with the design and location of dependability mechanisms. To achieve this we replicate important variables so that they can be used as part of standard, efficient dependability mechanisms. These well-understood mechanisms are then deployed in all relevant locations. To validate the proposed methodology we apply it to three complex software systems, evaluating the dependability enhancement and execution overhead in each case. The results demonstrate that the system failure rate of a wrapped software system can be several orders of magnitude lower than that of an unwrapped equivalent.
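
    As a hedged illustration of replicating important variables inside a wrapper (the paper's actual mechanisms and their placement are not reproduced), the sketch below keeps three copies of a value and votes on every read, masking corruption of a minority of the copies.

```python
from collections import Counter

class ReplicatedVar:
    """Wrapper that stores several copies of a value and returns the majority
    on read, so corruption of a minority of the copies is masked."""

    def __init__(self, value, copies=3):
        self._copies = [value] * copies

    def write(self, value):
        self._copies = [value] * len(self._copies)

    def read(self):
        value, votes = Counter(self._copies).most_common(1)[0]
        if votes <= len(self._copies) // 2:
            raise RuntimeError("no majority: too many copies corrupted")
        return value

x = ReplicatedVar(42)
x._copies[1] = 999        # simulate a single corrupted replica
print(x.read())           # 42 -- the fault is masked by voting
```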

    Cloud Security: A Review of Recent Threats and Solution Models

    The most significant barrier to the wide adoption of cloud services has been attributed to perceived cloud insecurity (Smitha, Anna and Dan, 2012). In reviewing this subject, this paper explores some of the major security threats to the cloud and the security models employed to tackle them. Access control violations, message integrity violations, data leakage, the inability to guarantee complete data deletion, code injection, malware and a lack of expertise in cloud technology rank among the major threats. The European Union invested €3m in City University London to research the certification of cloud security services. This and other recent developments are significant in addressing increasing public concerns regarding the confidentiality, integrity and privacy of data held in cloud environments. Among the current cloud security models adopted to address these threats is the encryption of all data at rest and in transit. The Cisco IronPort S-Series web security appliance was among the security solutions proposed for cloud access control issues. Two-factor authentication with RSA SecurID and close monitoring appeared to be the most popular solutions to authentication and access control issues in the cloud. Database Activity Monitoring, File Activity Monitoring, URL filters and Data Loss Prevention were solutions for detecting and preventing unauthorised data migration into and within clouds. There is as yet no guarantee of complete deletion of data by cloud providers on client request; however, FADE may be a solution (Yang et al., 2012).
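
    To make the "encryption of all data at rest and in transit" point concrete, the sketch below encrypts an object on the client side before upload using the third-party cryptography package; it is a generic illustration, not one of the vendor solutions named above (transport would additionally rely on TLS).

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()              # kept by the client, never by the provider
cipher = Fernet(key)

plaintext = b"customer record 4711"
ciphertext = cipher.encrypt(plaintext)   # this is what the cloud actually stores

# Later, after downloading the object again:
assert cipher.decrypt(ciphertext) == plaintext
```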

    Proactive cloud management for highly heterogeneous multi-cloud infrastructures

    Various studies in the literature have demonstrated that the cloud computing paradigm can help to improve the availability and performance of applications subject to software anomalies. Indeed, the cloud resource provisioning model enables users to rapidly acquire new processing resources, even distributed over different geographical regions, that can be promptly used in the case of, e.g., crashes or hangs of running machines, as well as to balance the load in the case of overloaded machines. Nevertheless, managing a geographically distributed cloud deployment can be a complex and time-consuming task. The Autonomic Cloud Manager (ACM) Framework is an autonomic framework for supporting proactive management of applications deployed over multiple cloud regions. It uses machine learning models to predict failures of virtual machines and to proactively redirect the load to healthy machines/cloud regions. In this paper, we study different policies to perform efficient proactive load balancing across cloud regions in order to mitigate the effect of software anomalies. These policies use predictions of the mean time to failure of virtual machines. We consider the case of heterogeneous cloud regions, i.e. regions with different amounts of resources, and we provide an experimental assessment of these policies in the context of the ACM Framework.
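
    The paper's concrete policies are not reproduced here; as a minimal sketch of MTTF-driven balancing over heterogeneous regions, the function below splits incoming load in proportion to each region's capacity weighted by its predicted mean time to failure (names and numbers are assumptions).

```python
def split_load(total_requests, regions):
    """regions: dict name -> (capacity, predicted_mttf_hours).
    Regions that are both larger and predicted to stay healthy longer receive
    more traffic; a region predicted to fail soon receives little load."""
    weights = {name: cap * mttf for name, (cap, mttf) in regions.items()}
    total = sum(weights.values())
    return {name: total_requests * w / total for name, w in weights.items()}

regions = {
    "eu-west": (100, 500.0),    # large region, healthy VMs
    "us-east": (100, 50.0),     # same size, but failures predicted soon
    "ap-south": (40, 500.0),    # smaller but healthy region
}
print(split_load(1000, regions))
# eu-west ~ 667, us-east ~ 67, ap-south ~ 267
```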