
    Trusted resource allocation in volunteer edge-cloud computing for scientific applications

    Data-intensive science applications in fields such as bioinformatics, health sciences, and materials discovery are becoming increasingly dynamic and demanding in their resource requirements. Researchers using these applications, which are built on advanced scientific workflows, frequently require a diverse set of resources that are often not available within private servers or a single Cloud Service Provider (CSP). For example, a user working with precision-medicine applications may restrict data services to CSPs that follow HIPAA (Health Insurance Portability and Accountability Act) guidelines while obtaining other services from different CSPs for economic viability. As data volumes grow, these workflows often require deployment and dynamic scaling of multi-cloud resources in an efficient and high-performance manner (e.g., quick setup, reduced computation time, and increased application throughput). At the same time, users seek to minimize the costs of configuring the related multi-cloud resources. While performance and cost are among the key factors in CSP resource selection, scientific workflows often process proprietary or confidential data, which introduces additional security-posture constraints. Users therefore have to make an informed decision on the selection of resources best suited to their applications while trading off the key selection factors of performance, agility, cost, and security (PACS). Furthermore, even with the most efficient resource allocation across multiple clouds, the cost to solution may not be economical for all users, which has led to new computing paradigms such as volunteer computing, where users draw on volunteered cyber resources to meet their computing requirements.
For economical and readily available resources, it is essential that such volunteered resources integrate well with cloud resources to provide the most efficient computing infrastructure for users. This dissertation tackles the individual stages in the lifecycle of resource brokering for users: collecting user requirements, capturing users' resource preferences, resource brokering, and task scheduling. For collecting user requirements, a novel approach based on an iterative design interface is proposed. In addition, a fuzzy inference-based approach is proposed to capture users' biases and expertise and guide resource selection for their applications. The results showed improved performance (i.e., time to execute) in 98 percent of the studied applications. The data collected on users' requirements and preferences is later used by an optimizer engine and machine learning algorithms for resource brokering. For resource brokering, a new integer linear programming-based solution (OnTimeURB) is proposed that creates multi-cloud template solutions for resource allocation while optimizing performance, agility, cost, and security. The solution was further improved by adding a machine learning model based on a naive Bayes classifier, which captures the true QoS of cloud resources to guide template solution creation. The proposed solution improved the time to execute for as many as 96 percent of the largest applications. To meet the need for economical computing resources, a new computing paradigm, namely Volunteer Edge Computing (VEC), is proposed, which reduces cost and improves performance and security by creating edge clusters comprising volunteered computing resources close to users. Initial results show improved execution times for application workflows against state-of-the-art solutions while utilizing only the most secure VEC resources.
Consequently, reinforcement learning-based solutions are used to characterize volunteered resources by their availability and their flexibility toward implementing security policies. This characterization facilitates efficient resource allocation and scheduling of workflow tasks, improving the performance and throughput of workflow executions. The VEC architecture is further validated with state-of-the-art bioinformatics and manufacturing workflows.
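The PACS tradeoff described above can be illustrated with a minimal sketch: a brute-force selection over candidate CSPs using a weighted utility over the four factors. The candidate data, weights, and normalization are illustrative assumptions; the dissertation's OnTimeURB uses a full integer linear programming formulation rather than this enumeration.

```python
# Hedged sketch: weighted PACS (performance, agility, cost, security)
# resource selection. All numbers are illustrative assumptions; the
# "cost" column is cost-efficiency, so higher is better for all factors.

candidates = [
    # name, performance, agility, cost-efficiency, security (all in [0, 1])
    ("csp_a", 0.90, 0.70, 0.40, 0.95),
    ("csp_b", 0.80, 0.85, 0.70, 0.60),
    ("csp_c", 0.60, 0.60, 0.90, 0.80),
]

def pacs_score(perf, agility, cost, security, weights):
    """Weighted utility over the four PACS factors."""
    wp, wa, wc, ws = weights
    return wp * perf + wa * agility + wc * cost + ws * security

def select_csp(candidates, weights=(0.3, 0.2, 0.2, 0.3)):
    """Pick the candidate maximizing the weighted PACS utility."""
    return max(candidates, key=lambda c: pacs_score(*c[1:], weights))

best = select_csp(candidates)
print(best[0])  # a security-weighted user here prefers csp_a
```

Changing the weight vector models user preference: raising the cost weight would steer the same candidates toward csp_c.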

    An adaptive trust based service quality monitoring mechanism for cloud computing

    Cloud computing is the newest paradigm in distributed computing, delivering computing resources over the Internet as services. Because of its attractiveness, the market is currently flooded with service providers, making it necessary for customers to identify the one that meets their service-quality requirements. Existing service-quality monitoring in cloud computing has been limited to quantification alone, while continuous improvement and distribution of service-quality scores have been implemented in other distributed computing paradigms but not specifically for cloud computing. This research investigates existing methods and proposes mechanisms for quantifying and ranking the service quality of providers. The solution proposed in this thesis consists of three mechanisms: a service-quality modeling mechanism, an adaptive trust computing mechanism, and a trust distribution mechanism for cloud computing. The Design Research Methodology (DRM) was modified by adding phases, means and methods, and probable outcomes, and this modified DRM is used throughout the study. The mechanisms were developed and tested iteratively until the expected outcome was achieved, and a comprehensive set of experiments was carried out in a simulated environment to validate their effectiveness. The evaluation compared their performance against the combined trust model and the QoS trust model for cloud computing, along with an adapted fuzzy theory-based trust computing mechanism and a super-agent-based trust distribution mechanism developed for other distributed systems. The results show that the proposed mechanisms are faster and more stable than the existing solutions in reaching final trust scores on all three parameters tested.
The results presented in this thesis are significant in making cloud computing acceptable to users by letting them verify the performance of service providers before making a selection.
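The adaptive trust computing idea above can be sketched with an exponentially weighted update: each new service-quality observation nudges the provider's trust score, with a learning rate controlling how quickly trust adapts. The thesis's actual mechanism is more elaborate; the learning rate and observations here are illustrative assumptions.

```python
# Hedged sketch of adaptive trust scoring: an exponentially weighted
# moving average over service-quality observations in [0, 1].
# alpha and the observation values are illustrative assumptions.

def update_trust(trust, observation, alpha=0.2):
    """Move the trust score a fraction alpha toward the latest observation."""
    return (1 - alpha) * trust + alpha * observation

trust = 0.5  # neutral prior for an unknown provider
for obs in [0.9, 0.8, 0.95, 0.85]:  # observed service-quality scores
    trust = update_trust(trust, obs)
print(round(trust, 3))  # trust rises toward the consistently good observations
```

A smaller alpha yields the stability the thesis evaluates (slow reaction to outliers); a larger alpha converges faster on the final trust score.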

    Quantitative Measures of Regret and Trust in Human-Robot Collaboration Systems

    Human-robot collaboration (HRC) systems integrate the strengths of both humans and robots to improve joint system performance. This thesis focuses on social human-robot interaction (sHRI) factors, in particular regret and trust. Humans experience regret during decision-making under uncertainty when they feel a better result could have been obtained by choosing differently. A framework to quantitatively measure regret is proposed, and quantitative regret analysis is embedded into Bayesian sequential decision-making (BSD) algorithms for HRC shared-vision tasks in both domain search and assembly. The BSD method has been used for robot decision-making tasks but has been shown to differ substantially from human decision-making patterns; regret theory, by contrast, qualitatively models humans' rational decision-making behavior under uncertainty. Moreover, it has been shown that a team's joint performance improves if all members share the same decision-making logic. Trust plays a critical role in determining the level of a human's acceptance, and hence utilization, of a robot. A dynamic network-based trust model combined with a time-series trust model is first implemented in a multi-robot motion-planning task with a human in the loop. In this model, however, the trust estimates for the robots are independent, which fails to capture correlative trust in multi-robot collaboration. To address this issue, the model is extended to interdependent multi-robot Dynamic Bayesian Networks.
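The regret notion above admits a compact quantitative form: after a decision under uncertainty, regret is the gap between the best outcome attainable in hindsight and the outcome of the action actually chosen. The payoff table below is an illustrative assumption, not the thesis's model.

```python
# Hedged sketch of quantitative regret: payoff difference between the
# chosen action and the best action for the state that actually occurred.
# The search-task payoffs are illustrative assumptions.

def regret(payoffs, chosen_action, realized_state):
    """Best attainable payoff in the realized state minus the chosen payoff."""
    best = max(p[realized_state] for p in payoffs.values())
    return best - payoffs[chosen_action][realized_state]

# payoffs[action][state]: a robot's search decision under two world states
payoffs = {
    "search_left":  {"target_left": 10, "target_right": 2},
    "search_right": {"target_left": 3,  "target_right": 9},
}
print(regret(payoffs, "search_left", "target_right"))  # best (9) - chosen (2) = 7
```

Regret is zero exactly when the chosen action was optimal for the realized state, which is what makes it a natural penalty term inside a sequential decision rule.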

    Biometric Based Intrusion Detection System using Dempster-Shafer Theory for Mobile Ad hoc Network Security

    A Mobile Ad hoc Network (MANET) is a collection of autonomous wireless mobile nodes forming a temporary network to exchange data packets without any fixed topology or centralized administration. In this dynamic network, each node changes its geographical position and acts as a router, forwarding packets to other nodes. Security in such networks is mainly protected through two kinds of approaches: prevention-based and detection-based. Current MANETs are vulnerable to many types of attacks. Multimodal biometric technology offers a possible solution for continuous user authentication in high-security MANETs. Dempster's rule of combination provides a numerical method for fusing multiple pieces of evidence from unreliable observers. This paper studies biometric authentication and an intrusion detection system with data fusion using Dempster-Shafer theory in such MANETs. Multimodal biometric technologies are deployed to work with intrusion detection and overcome the limitations of unimodal biometric techniques.
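Dempster's rule of combination, named above, can be sketched for the simplest biometric frame {genuine, imposter}, where mass assigned to the whole frame ("theta") represents ignorance. The mass values for the two sensors are illustrative assumptions.

```python
# Hedged sketch of Dempster's rule of combination over the frame
# {g: genuine, i: imposter}, with 'theta' the mass on the whole frame
# (uncertainty). Sensor mass assignments are illustrative assumptions.

def combine(m1, m2):
    """Fuse two mass functions; conflicting mass is discarded and renormalized."""
    # Conflict: one source supports genuine while the other supports imposter.
    k = m1["g"] * m2["i"] + m1["i"] * m2["g"]
    norm = 1.0 - k
    g = (m1["g"] * m2["g"] + m1["g"] * m2["theta"] + m1["theta"] * m2["g"]) / norm
    i = (m1["i"] * m2["i"] + m1["i"] * m2["theta"] + m1["theta"] * m2["i"]) / norm
    theta = (m1["theta"] * m2["theta"]) / norm
    return {"g": g, "i": i, "theta": theta}

face = {"g": 0.7, "i": 0.1, "theta": 0.2}   # face-recognition evidence
iris = {"g": 0.8, "i": 0.1, "theta": 0.1}   # iris-scan evidence
fused = combine(face, iris)
print(round(fused["g"], 3))  # agreement strengthens belief in "genuine"
```

Note how fusion drives the belief in "genuine" above either individual sensor's mass, which is the effect the paper exploits for continuous authentication.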

    Intelligent Trust based Security Framework for Internet of Things

    Trust models have recently been proposed for Internet of Things (IoT) applications as a significant means of protection against external threats, offering a viable, trustworthy, and secure approach to IoT risk management. At present, no trust-based security mechanism has been specified for immersive IoT applications. In distributed IoT systems, unfamiliar participants or machines share resources (tools, network routes, connections, processing power, and storage) to carry out jobs or provide services, which exposes IoT users to greater risks such as loss of anonymity, data leakage, and other safety violations. Trust measurement for new nodes has therefore become crucial for mitigating unknown peer threats. Trust must be evaluated in the application context using acceptable metrics based on the functional properties of nodes. Current models cannot explicitly capture multifaceted trust parameterization, and most model loss of confidence inadequately; reputation ratings are frequently mis-weighted when prior confidence is taken into account, amplifying the impact of malicious recommendations. In this manuscript, a systematic method called Relationship History with cumulative trust value (a distributed confidence management scheme) is proposed to evaluate an interacting peer's trustworthiness in a specific context. It includes estimating confidence decay, gathering and weighing trust parameters, and calculating the cumulative trust value between nodes. Trust standards can then rely on practical contextual resources to determine whether a service provider is trustworthy and whether it delivers effective service. Simulation results suggest that the proposed model outperforms similar models in security, routing, and efficiency, and its performance is further assessed in terms of derived utility, trust precision, convergence, and longevity.
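The confidence-decay and recommendation-weighting ideas above can be sketched as follows: direct interaction history is decayed over time so stale evidence matters less, and third-party recommendations are weighted by the recommender's own trust to damp malicious ratings. The decay rate, blend weight, and sample interactions are illustrative assumptions, not the manuscript's exact scheme.

```python
# Hedged sketch of cumulative trust with confidence decay: older
# interaction outcomes get exponentially smaller weight, and indirect
# evidence is weighted by each recommender's trust. All constants and
# sample values are illustrative assumptions.

import math

def decayed_direct_trust(history, now, decay=0.1):
    """history: list of (timestamp, outcome in [0, 1]); exponential time decay."""
    weights = [math.exp(-decay * (now - t)) for t, _ in history]
    return sum(w * o for w, (_, o) in zip(weights, history)) / sum(weights)

def cumulative_trust(direct, recommendations, w_direct=0.7):
    """Blend direct trust with recommendations weighted by recommender trust."""
    total = sum(rt for rt, _ in recommendations)
    indirect = sum(rt * score for rt, score in recommendations) / total
    return w_direct * direct + (1 - w_direct) * indirect

direct = decayed_direct_trust([(0, 0.9), (5, 0.8), (9, 0.6)], now=10)
# (recommender_trust, recommended_score) pairs: a low-trust recommender's
# bad rating contributes little to the cumulative value
trust = cumulative_trust(direct, [(0.9, 0.7), (0.2, 0.1)])
print(0.0 <= trust <= 1.0)
```

Weighting recommendations by recommender trust is what counters the mis-weighted "esteem ratings" problem the abstract describes: a malicious low-trust peer cannot drag the score far.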

    Department of Computer Science Activity 1998-2004

    This report summarizes much of the research and teaching activity of the Department of Computer Science at Dartmouth College between late 1998 and late 2004. The material for this report was collected as part of the final report for NSF Institutional Infrastructure award EIA-9802068, which funded equipment and technical staff during that six-year period. This equipment and staff supported essentially all of the department's research activity during that period.

    A secure mechanism design-based and game theoretical model for MANETs

    To avoid a single point of failure at the certificate authority (CA) in a MANET, a decentralized solution is proposed in which nodes are grouped into clusters. Each cluster should contain at least two confident nodes: one acting as the CA and the other as the registration authority (RA). A Dynamic Demilitarized Zone (DDMZ), formed from one or more RA nodes, is proposed to protect the CA node against potential attacks. The problems with such a model are: (1) clusters with only one confident node (the CA) cannot be created, so cluster sizes grow, which negatively affects cluster services and stability; (2) clusters with a high density of RAs can cause channel collisions at the CA; and (3) cluster lifetimes are reduced because RA monitoring is always running (i.e., consuming resources). In this paper, we propose a mechanism design-based model that allows clusters with a single trusted node (the CA) to be created. The mechanism motivates nodes outside the confident community to participate by giving them incentives in the form of trust, which they can use for cluster services. To achieve this goal, an RA selection algorithm is proposed that selects nodes based on a predefined selection-criteria function and location (i.e., using a directional antenna). Such a model is known as moderate. Based on the security risk, more RA nodes can be added to form a robust DDMZ. Here, we consider the tradeoff between security and resource consumption by formulating the problem as a nonzero-sum noncooperative game between the CA and the attacker. Finally, empirical results are provided to support our solutions.
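The security/resource tradeoff above can be sketched as a tiny 2x2 nonzero-sum game: the CA chooses its DDMZ size, the attacker chooses whether to attack, and each cell holds a (CA, attacker) payoff pair. The payoff numbers are illustrative assumptions, not the paper's utilities.

```python
# Hedged sketch of the CA-vs-attacker game: the CA's best DDMZ size
# depends on how likely an attack is. Payoffs are illustrative
# assumptions, written as (ca_payoff, attacker_payoff).

game = {
    ("few_ra",  "attack"):    (-5,  4),   # weak DDMZ: attack likely succeeds
    ("few_ra",  "no_attack"): ( 3,  0),   # low monitoring cost, no threat
    ("many_ra", "attack"):    ( 1, -2),   # attack blocked, but monitoring is costly
    ("many_ra", "no_attack"): ( 0,  0),   # monitoring cost paid for nothing
}

def expected_ca_payoff(ca_action, p_attack):
    """CA's expected payoff when the attacker attacks with probability p_attack."""
    return (p_attack * game[(ca_action, "attack")][0]
            + (1 - p_attack) * game[(ca_action, "no_attack")][0])

def best_ddmz(p_attack):
    """The CA's best response to an estimated attack probability."""
    return max(("few_ra", "many_ra"), key=lambda a: expected_ca_payoff(a, p_attack))

print(best_ddmz(0.1), best_ddmz(0.8))  # low risk -> few RAs; high risk -> many RAs
```

This captures the paper's point that the DDMZ should grow with the assessed security risk rather than being fixed.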

    Secure Routing Environment with Enhancing QoS in Mobile Ad-Hoc Networks

    A mobile ad hoc network is an infrastructure-free, self-configured network connected without wires. Because it has no infrastructure or centralized control, such a network is suitable only for temporary communication links, so maintaining quality of service (QoS) and security-aware routing is difficult. The purpose of QoS-aware routing is to find an optimal secure route from source to destination that satisfies two or more QoS constraints. In this paper, we propose a net-based multicast routing scheme that discovers all possible secure paths using the Secure Closest Spot Trust Certification (SCSTC) protocol and derives the optimal link path with the Dolphin Echolocation Algorithm (DEA). The numerical results and performance analysis show that the proposed routing protocol achieves a better packet delivery ratio, lower packet delay, and reduced overhead in a secured environment.
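The route-selection step above can be sketched generically: among paths already certified secure (by SCSTC in the paper), keep those satisfying the QoS constraints and pick the most trusted. The candidate paths, thresholds, and trust scores are illustrative assumptions; the paper uses the Dolphin Echolocation metaheuristic rather than this exhaustive filter.

```python
# Hedged sketch of QoS-constrained secure route selection over a small
# set of pre-certified paths. All path data and thresholds are
# illustrative assumptions.

# (hop list, delay_ms, bandwidth_mbps, trust)
paths = [
    (["s", "a", "d"],      40, 8,  0.90),
    (["s", "b", "c", "d"], 25, 12, 0.70),
    (["s", "e", "d"],      60, 20, 0.95),
]

def select_route(paths, max_delay=50, min_bandwidth=10):
    """Filter by two QoS constraints, then maximize trust among the feasible."""
    feasible = [p for p in paths if p[1] <= max_delay and p[2] >= min_bandwidth]
    if not feasible:
        return None
    return max(feasible, key=lambda p: p[3])

route = select_route(paths)
print(route[0])  # only the middle path meets both QoS constraints
```

The most trusted path overall is rejected here for violating the delay bound, illustrating why trust and QoS must be optimized jointly rather than separately.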

    Using Spammers' Computing Resources for Volunteer Computing

    Spammers continually look to circumvent the countermeasures designed to slow them down. An immense amount of time and money is currently devoted to hiding spam, but not enough to effectively preventing it. One approach to preventing spam is to force the spammer's machine to solve a computational problem of varying difficulty before granting access: suspicious or problematic requests are given difficult problems to solve, while legitimate requests are allowed through with minimal computation. Unfortunately, most systems that employ this model waste the computing resources involved, directing them toward cryptographic problems that provide no societal benefit. While systems such as reCAPTCHA and FoldIt have allowed users to contribute solutions to useful problems interactively, an analogous solution for non-interactive proof-of-work does not exist. Toward this end, this paper describes MetaCAPTCHA and reBOINC, an infrastructure for supporting useful proof-of-work that is integrated into a web spam-throttling service. The infrastructure dynamically issues CAPTCHAs and proof-of-work puzzles while ensuring that malicious users solve challenging puzzles. Additionally, it provides a framework that redirects the computational resources of spammers toward meaningful research. To validate the efficacy of our approach, we describe prototype implementations based on OpenCV and BOINC that demonstrate the ability to harvest spammers' resources for beneficial purposes.
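The varying-difficulty proof-of-work idea above can be sketched hashcash-style: a more suspicious request is assigned a higher difficulty, i.e., more leading zero bits required in a hash. MetaCAPTCHA and reBOINC issue *useful* work instead of this classic wasteful puzzle; the snippet is only an illustration of the throttling mechanism, with the challenge string and difficulty chosen arbitrarily.

```python
# Hedged sketch of adjustable-difficulty proof-of-work (hashcash-style):
# the solver searches for a nonce whose SHA-256 digest, combined with
# the challenge, falls below a difficulty-dependent target.

import hashlib
from itertools import count

def solve(challenge: bytes, difficulty_bits: int) -> int:
    """Find a nonce giving a digest with at least difficulty_bits leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(challenge: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Cheap check: one hash, versus ~2^difficulty_bits hashes to solve."""
    digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = solve(b"mail-request-42", difficulty_bits=12)  # cheap demo difficulty
print(verify(b"mail-request-42", nonce, 12))
```

The asymmetry is the point: verification costs one hash while solving costs about 2^difficulty hashes, so the server can scale a sender's burden with its spam score.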