    EFFICIENT DATA PROTECTION BY NOISING, MASKING, AND METERING

    Protecting data secrecy is an important design goal of computing systems. Conventional techniques like access control mechanisms and cryptography are widely deployed, and yet security breaches and data leakages still occur. There are several challenges. First, the sensitivity of system data is not always easy to determine. Second, trustworthiness is not a constant property of system components and users. Third, a system’s functional requirements can be at odds with its data protection requirements. In this dissertation, we show that efficient data protection can be achieved by noising, masking, or metering sensitive data. Specifically, three practical problems are addressed: storage side-channel attacks in Linux, server anonymity violations in web sessions, and data theft by malicious insiders. To mitigate storage side-channel attacks, we introduce a differentially private system, dpprocfs, which injects noise into side-channel vectors and also reestablishes invariants on the noised outputs. Our evaluations show that dpprocfs mitigates known storage side channels while preserving the utility of the proc filesystem for monitoring and diagnosis. To enforce server anonymity, we introduce a cloud service, PoPSiCl, which masks server identifiers, including DNS names and IP addresses, with personalized pseudonyms. PoPSiCl can defend against both passive and active network attackers with minimal impact on web-browsing performance. To prevent data theft by insiders, we introduce a system, Snowman, which allows users to access data only remotely and accurately meters the sensitive data output to the user by conducting taint analysis in a replica of the application execution, without slowing the interactive user session.
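
    To make the noising-plus-invariants idea concrete, here is a minimal Python sketch assuming a Laplace mechanism over a single numeric /proc counter; the function names and the monotonicity invariant are illustrative assumptions, not the actual dpprocfs interface.

        import random

        def laplace_noise(scale: float) -> float:
            # The difference of two i.i.d. exponentials is Laplace(0, scale).
            return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

        def noised_counter(true_value: int, prev_output: int,
                           epsilon: float, sensitivity: float = 1.0) -> int:
            # Add Laplace noise scaled to sensitivity/epsilon, then re-establish
            # invariants a real /proc counter satisfies: integer-valued and
            # non-decreasing across successive reads.
            noised = true_value + laplace_noise(sensitivity / epsilon)
            return max(prev_output, int(round(noised)))

        # Successive reads of a noised counter, e.g. bytes written by a process.
        out = 0
        for true_count in (100, 180, 260):
            out = noised_counter(true_count, out, epsilon=0.5)

    Clamping to the previous output keeps successive reads monotone, the kind of invariant the system re-establishes so that monitoring tools consuming the noised values still behave sensibly.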

    Considering Human Aspects on Strategies for Designing and Managing Distributed Human Computation

    A human computation system can be viewed as a distributed system in which the processors are humans, called workers. Such systems harness the cognitive power of a group of workers connected to the Internet to execute relatively simple tasks whose solutions, once grouped, solve a problem that systems equipped with only machines could not solve satisfactorily. Examples of such systems are Amazon Mechanical Turk and the Zooniverse platform. A human computation application comprises a group of tasks, each of which can be performed by one worker. Tasks might have dependencies among each other. In this study, we propose a theoretical framework to analyze this type of application from a distributed-systems point of view. Our framework is established on three dimensions that represent different perspectives from which human computation applications can be approached: quality-of-service requirements, design and management strategies, and human aspects. Using this framework, we review human computation from the perspective of programmers seeking to improve the design of human computation applications and managers seeking to increase the effectiveness of human computation infrastructures in running such applications. In doing so, besides integrating and organizing what has been done in this direction, we also put into perspective the fact that the human aspects of the workers in such systems introduce new challenges in terms of, for example, task assignment, dependency management, and fault prevention and tolerance. We discuss how these challenges are related to distributed systems and other areas of knowledge.
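
    To illustrate the dependency-management and task-assignment aspects in code, the sketch below (Python, with hypothetical names) models an application as a DAG of tasks and computes which tasks are ready to hand to workers at each round; it is a toy rendering of the framework's vocabulary, not an implementation of it.

        def ready_tasks(deps: dict[str, set[str]], done: set[str]) -> list[str]:
            # A task is ready once all of its dependencies are completed.
            return [t for t, d in deps.items() if t not in done and d <= done]

        # Toy application: one labeling task, two independent checks, then a merge.
        deps = {
            "label":   set(),
            "check_a": {"label"},
            "check_b": {"label"},
            "merge":   {"check_a", "check_b"},
        }
        done: set[str] = set()
        while len(done) < len(deps):
            batch = ready_tasks(deps, done)
            # A real platform would post each batch to a worker pool and mark
            # tasks done as (possibly faulty) human answers arrive.
            done |= set(batch)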

    Cross-Layer Cloud Performance Monitoring, Analysis and Recovery

    The basic idea of Cloud computing is to offer software and hardware resources as services. These services are provided at different layers: Software (Software as a Service: SaaS), Platform (Platform as a Service: PaaS) and Infrastructure (Infrastructure as a Service: IaaS). In such a complex environment, performance issues are the norm rather than the exception, and performance-related problems may occur at any layer. Thus, it is necessary to monitor all Cloud layers and analyze their performance parameters to detect and rectify related problems. This thesis presents a novel cross-layer reactive performance monitoring approach for Cloud computing environments, based on the methodology of Complex Event Processing (CEP). The proposed approach is called CEP4Cloud. It analyzes monitored events to detect performance-related problems and performs actions to fix them. The proposal is based on the use of (1) a novel multi-layer monitoring approach, (2) a new cross-layer analysis approach and (3) a novel recovery approach. The proposed monitoring approach operates at all Cloud layers while collecting related parameters. It makes use of existing monitoring tools and a new monitoring approach for Cloud services at the SaaS layer. The proposed SaaS monitoring approach is called AOP4CSM. It is based on aspect-oriented programming and monitors quality-of-service parameters of the SaaS layer in a non-invasive manner: AOP4CSM modifies neither the server implementation nor the client implementation. The defined cross-layer analysis approach is called D-CEP4CMA. Instead of having to manually specify continuous queries on monitored event streams, CEP queries are derived from analyzing the correlations between monitored metrics across multiple Cloud layers. The results of the correlation analysis allow us to reduce the number of monitored parameters and enable us to perform a root cause analysis to identify the causes of performance-related problems. The derived analysis rules are implemented as queries in a CEP engine. D-CEP4CMA is designed to dynamically switch between different centralized and distributed CEP architectures depending on the load/memory of the CEP machine and network traffic conditions in the observed Cloud environment. The proposed recovery approach is based on a novel action manager framework that applies recovery actions at all Cloud layers: it assigns a set of repair actions to each performance-related problem and checks the success of each applied action. The results of several experiments illustrate the merits of the reactive performance monitoring approach and its main components (i.e., monitoring, analysis and recovery). First, experimental results show the efficiency of AOP4CSM (very low overhead). Second, the obtained results demonstrate the benefits of the analysis approach in terms of precision and recall compared to threshold-based methods, and they show the accuracy of the analysis approach in identifying the causes of performance-related problems. Furthermore, experiments illustrate the efficiency of D-CEP4CMA and its performance in terms of precision and recall compared to purely centralized and purely distributed CEP architectures. Moreover, experimental results indicate that the time needed to fix a performance-related problem is reasonably short, and that the CPU overhead of using CEP4Cloud is negligible. Finally, experimental results demonstrate the merits of CEP4Cloud in terms of speeding up repairs and reducing the number of triggered alarms compared to baseline methods.
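
    As a rough illustration of the analysis and recovery ideas, the Python sketch below expresses one cross-layer CEP-style rule over a window of monitored events, plus an action-manager table mapping a diagnosed problem to an ordered list of repair actions; the event fields, thresholds, and repair names are assumptions for illustration, not CEP4Cloud's actual rules.

        from dataclasses import dataclass

        @dataclass
        class Event:
            layer: str     # "SaaS", "PaaS", or "IaaS"
            metric: str    # e.g. "response_time_ms", "cpu_util"
            value: float

        def iaas_cpu_saturation(window: list[Event]) -> bool:
            # Cross-layer rule: high SaaS latency co-occurring with high IaaS
            # CPU load in one window points to an infrastructure-level cause.
            slow = any(e.layer == "SaaS" and e.metric == "response_time_ms"
                       and e.value > 500 for e in window)
            busy = any(e.layer == "IaaS" and e.metric == "cpu_util"
                       and e.value > 0.9 for e in window)
            return slow and busy

        # Action manager: each diagnosed problem maps to an ordered list of
        # repair actions, tried in turn until one is checked as successful.
        repairs = {
            "iaas_cpu_saturation": ["throttle_batch_jobs", "scale_out_vm"],
        }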

    Cross-layer multi-cloud real-time application QoS monitoring and benchmarking as-a-service framework

    Cloud computing provides on-demand access to affordable hardware (e.g., multi-core CPUs, GPUs, disks, and networking equipment) and software (e.g., databases, application servers and data processing frameworks) platforms with features such as elasticity, pay-per-use, low upfront investment and low time to market. This has led to the proliferation of business-critical applications that leverage various cloud platforms. Such applications, hosted on single or multiple cloud platforms, have diverse characteristics requiring extensive monitoring and benchmarking mechanisms to ensure run-time Quality of Service (QoS) (e.g., latency and throughput). The process of monitoring and benchmarking cloud applications remains a critical issue requiring further study. Current monitoring and benchmarking approaches do not provide a holistic view of performance QoS for distributed applications across cloud layers in multi-cloud environments. Furthermore, current monitoring frameworks are limited to monitoring tasks and do not incorporate benchmarking abilities; in other words, there is no unified framework that combines monitoring and benchmarking functionalities. Having both capabilities under one framework empowers the cloud user with more in-depth control and awareness of cloud services. The thesis identifies and discusses the major research dimensions and design issues related to developing techniques that can monitor and benchmark an application’s components across layers on multiple clouds. Furthermore, the thesis discusses to what extent such research dimensions and design issues are handled by current academic research papers as well as by existing commercial monitoring tools. Moreover, the thesis addresses an important research challenge: how to undertake cross-layer cloud monitoring and benchmarking in multi-cloud environments to provide essential information for effective management of cloud application QoS. It proposes, develops, implements and validates CLAMBS: Cross-Layer Multi-Cloud Application Monitoring and Benchmarking as-a-Service Framework. The core contributions of this thesis are the CLAMBS framework and the underlying monitoring and benchmarking techniques, which are capable of: i) performing QoS monitoring of application components (e.g., database, web server, application server) that may be deployed across multiple cloud platforms (e.g., Amazon EC2 and Microsoft Azure); and ii) giving visibility into the QoS of individual application components, which is not supported by current monitoring and benchmarking frameworks. Experiments are conducted on real-world multi-cloud platforms to empirically evaluate the framework, and the results validate that CLAMBS can effectively monitor and benchmark applications running across layers on multiple clouds. The thesis presents implementation and evaluation details of the proposed CLAMBS framework. It demonstrates the feasibility and scalability of the proposed framework in real-world environments by implementing a proof-of-concept prototype on multi-cloud platforms. Finally, it presents a model for analysing the communication overheads introduced by various components (e.g., agents and the manager) of CLAMBS in multi-cloud environments.
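
    The per-component, per-cloud visibility that CLAMBS targets can be pictured with a small manager-side aggregation sketch in Python; the sample schema and helper below are assumed for illustration rather than taken from the CLAMBS implementation.

        import statistics
        from dataclasses import dataclass

        @dataclass
        class QosSample:
            cloud: str       # e.g. "amazon-ec2", "microsoft-azure"
            layer: str       # "SaaS", "PaaS", or "IaaS"
            component: str   # e.g. "web_server", "database"
            metric: str      # e.g. "latency_ms"
            value: float

        def component_view(samples: list[QosSample],
                           component: str, metric: str) -> dict[str, float]:
            # Manager-side aggregation: per-cloud mean of one component's
            # metric, i.e. visibility into individual application components.
            by_cloud: dict[str, list[float]] = {}
            for s in samples:
                if s.component == component and s.metric == metric:
                    by_cloud.setdefault(s.cloud, []).append(s.value)
            return {c: statistics.fmean(v) for c, v in by_cloud.items()}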

    An OSINT Approach to Automated Asset Discovery and Monitoring

    The main objective of this thesis is to improve the efficiency of security operations centers through the articulation of different publicly open sources of security-related feeds. This is challenging because of the different abstraction models of the feeds that need to be made compatible, the range of control values that each data source can have and that will impact the security events, and the scalability of the computational and networking resources required to collect security events. Following the industry standards proposed by the literature (the OSCP guide, PTES and OWASP), the detection of hosts and sub-domains using an articulation of several sources is regarded as the first interaction in an engagement. This first interaction often misses some sources that could allow the disclosure of more assets. This has become important as networks have scaled up to the cloud, where the IP address range is not owned by the company and important applications are often shared within the same IP, as with virtual hosts serving several applications on the same server. We focus on the first step of any engagement: the enumeration of the target network. Attackers often use several techniques to enumerate the target to discover vulnerable services. This enumeration can be improved by adding several other sources and techniques that are often left aside in the literature. Also, by creating an automated process, it is possible for security operations centers to discover these assets and map the applications in use to keep track of said vulnerabilities, using OSINT techniques and publicly available solutions, before attackers try to exploit the service. This gives the view of Internet-facing services that attackers commonly obtain, without querying the service directly, thereby evading detection. This research fits within the complete engagement process and should be integrated into already-built solutions; the results should therefore be able to connect to additional applications in order to reach further into the engagement process. By addressing these challenges, we expect to greatly aid sysadmin and security teams, helping them secure their assets and ensure the security hygiene of the enterprise, resulting in better policy compliance, without ever connecting to the client hosts.
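
    As a flavor of the passive enumeration step, the Python sketch below collects candidate subdomains from certificate-transparency logs via crt.sh, one commonly used public source; the endpoint and JSON field follow crt.sh's public interface, but the helper itself is an illustrative assumption, not part of the thesis tooling.

        import json
        import urllib.request

        def crtsh_subdomains(domain: str) -> set[str]:
            # Query certificate-transparency logs via crt.sh; no packet ever
            # reaches the target's own infrastructure, so detection is evaded.
            url = f"https://crt.sh/?q=%25.{domain}&output=json"
            with urllib.request.urlopen(url, timeout=30) as resp:
                entries = json.load(resp)
            names: set[str] = set()
            for entry in entries:
                for name in entry.get("name_value", "").splitlines():
                    name = name.strip().lower().lstrip("*.")
                    if name.endswith(domain):
                        names.add(name)
            return names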