
    A comparative study of structured and un-structured remote data access in distributed computing systems

    Recently, the use of distributed computing systems has been growing rapidly as a result of cheap and advanced microelectronic technology. In addition to the decrease in hardware costs, the tremendous development in machine-to-machine communication interfaces, especially in local area networking, also favours the use of distributed systems. Distributed systems often require remote access to data stored at different sites. Generally, two models of access to remote data storage exist: the unstructured and structured models. In the former, data is simply stored as a row of bytes, whereas in the latter, data is stored along with the associated access code. The objective of this thesis is to compare these two models and hence determine the tradeoffs of each. First, an extended review of the field of distributed data access is provided, addressing key issues such as the basic design principles of distributed computing systems and the notions of abstract data types, data inheritance, data type systems and data persistence. Secondly, a distributed system is implemented using the persistent programming language PS-algol and the high-level language C in conjunction with the remote procedure call facilities available in the Unix 4.2 BSD operating system. This distributed system makes extensive use of Unix's software tools and is hence called DCSUNIX, for Distributed Computing System on UNIX. Thirdly, two specific applications which employ the implemented system are given so that a comparison can be made between the two remote data access models. Finally, the implemented system is compared against the criteria established earlier in the thesis. Keywords: abstract data types, class, database management, data persistence, information hiding, inheritance, object oriented programming, programming languages, remote procedure calls, transparency, and type checking
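    The tradeoff between the two models can be conveyed with a short sketch. The interfaces below are hypothetical illustrations, not the DCSUNIX API from the thesis: the unstructured store hands back raw bytes and leaves all interpretation to the client, while the structured store hides its representation behind the operations of an abstract data type.

```python
# Hypothetical interfaces illustrating the two remote data access models;
# these are not the DCSUNIX API described in the thesis.

class UnstructuredStore:
    """Unstructured model: data is just a row of bytes; the client interprets it."""
    def __init__(self):
        self._bytes = bytearray()

    def read(self, offset: int, length: int) -> bytes:
        return bytes(self._bytes[offset:offset + length])

    def write(self, offset: int, data: bytes) -> None:
        self._bytes[offset:offset + len(data)] = data


class StructuredCounter:
    """Structured model: data is stored with its access code, so the
    store itself enforces the abstract data type's invariants."""
    def __init__(self):
        self._value = 0              # representation hidden from clients

    def increment(self) -> None:     # clients use only the declared operations
        self._value += 1

    def value(self) -> int:
        return self._value
```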

    Trusted Computing and Secure Virtualization in Cloud Computing

    Large-scale deployment and use of cloud computing in industry is accompanied, and at the same time hampered, by concerns regarding the protection of data handled by cloud computing providers. One of the consequences of moving data processing and storage off company premises is that organizations have less control over their infrastructure. As a result, cloud service (CS) clients must trust that the CS provider is able to protect their data and infrastructure from both external and internal attacks. Currently, however, such trust can only rely on organizational processes declared by the CS provider and cannot be remotely verified and validated by an external party. Enabling the CS client to verify the integrity of the host where the virtual machine instance will run, as well as to ensure that the virtual machine image has not been tampered with, are steps towards building trust in the CS provider. Having the tools to perform such verifications prior to the launch of the VM instance allows CS clients to decide at runtime whether certain data should be stored, or calculations performed, on the VM instance offered by the CS provider. This thesis combines three components -- trusted computing, virtualization technology and cloud computing platforms -- to address issues of trust and security in public cloud computing environments. Of the three components, virtualization technology has had the longest evolution and is a cornerstone for the realization of cloud computing. Trusted computing is a recent industry initiative that aims to implement the root of trust in a hardware component, the trusted platform module. The initiative has been formalized in a set of specifications and is currently at version 1.2. Cloud computing platforms pool virtualized computing, storage and network resources to serve a large number of customers, using a multi-tenant multiplexing model to offer on-demand self-service over broad network access. Open source cloud computing platforms are, similar to trusted computing, a fairly recent technology in active development. The issue of trust in public cloud environments is addressed by examining the state of the art within cloud computing security and subsequently addressing the issues of establishing trust in the launch of a generic virtual machine in a public cloud environment. As a result, the thesis proposes a trusted launch protocol that allows CS clients to verify and ensure the integrity of the VM instance at launch time, as well as the integrity of the host where the VM instance is launched. The protocol relies on the use of a Trusted Platform Module (TPM) for key generation and data protection. The TPM also plays an essential part in the integrity attestation of the VM instance host. Along with a theoretical, platform-agnostic protocol, the thesis also describes a detailed implementation design of the protocol using the OpenStack cloud computing platform. In order to verify the implementability of the proposed protocol, a prototype implementation has been built using a distributed deployment of OpenStack. While the protocol covers only the trusted launch procedure using generic virtual machine images, it is a step towards the creation of a secure and trusted public cloud computing environment
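    As a rough illustration of the kind of checks such a trusted launch protocol performs, consider the sketch below. It is deliberately simplified: the names and flow are assumptions for illustration, not the thesis's actual protocol or its OpenStack implementation, and a real TPM attestation involves signed PCR quotes rather than bare hashes.

```python
import hashlib

def measure(data: bytes) -> str:
    """Hash a component, loosely analogous to a TPM measurement."""
    return hashlib.sha256(data).hexdigest()

def verify_launch(vm_image: bytes,
                  host_measurement: str,
                  expected_image_hash: str,
                  trusted_host_measurements: set) -> bool:
    # 1. The VM image must not have been tampered with.
    if measure(vm_image) != expected_image_hash:
        return False
    # 2. The host's attested configuration must be one the client trusts.
    if host_measurement not in trusted_host_measurements:
        return False
    # Only now would the client allow sensitive data onto the instance.
    return True
```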

    A linguistic approach to concurrent, distributed, and adaptive programming across heterogeneous platforms

    Two major trends in computing hardware during the last decade have been an increase in the number of processing cores found in individual computer hardware platforms and the ubiquity of distributed, heterogeneous systems. Together, these changes can improve not only the performance of a range of applications, but the types of applications that can be created. Despite the advances in hardware technology, advances in the programming of such systems have not kept pace. Traditional concurrent programming has always been challenging, and is only set to become more so as the level of hardware concurrency increases. The different hardware platforms which make up heterogeneous systems come with domain-specific programming models, which are not designed to interact, nor to take into account the different resource constraints present across different hardware devices, motivating a need for runtime reconfiguration or adaptation. This dissertation investigates the actor model of computation as an appropriate abstraction to address the issues present in programming concurrent, distributed, and adaptive applications across different scales and types of computing hardware. Given the limitations of other approaches, this dissertation describes a new actor-based programming language (Ensemble) and its runtime to address these challenges. The goal of this language is to enable non-specialist programmers to take advantage of parallel, distributed, and adaptive programming without requiring in-depth knowledge of hardware architectures or software frameworks. There is also a description of the design and implementation of the runtime system which executes Ensemble applications across a range of heterogeneous platforms. To show the suitability of the actor-based abstraction in creating applications for such systems, the language and runtime were evaluated in terms of linguistic complexity and performance. These evaluations covered programming embedded, concurrent, distributed, and adaptable applications, as well as combinations thereof. The results show that the actor model provides an objectively simple way to program such systems without sacrificing performance
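    Ensemble itself is not reproduced here; the following Python sketch only conveys the actor abstraction the dissertation builds on: each actor owns a private mailbox and processes one message at a time, so no state is shared between concurrent activities.

```python
import queue
import threading
import time

class Actor:
    def __init__(self, behaviour):
        self._mailbox = queue.Queue()      # private message queue
        self._behaviour = behaviour
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, message):
        self._mailbox.put(message)         # asynchronous, non-blocking send

    def _run(self):
        while True:                        # handle one message at a time
            self._behaviour(self, self._mailbox.get())

# A printer actor: messages are processed sequentially, in arrival order.
printer = Actor(lambda self, msg: print("got:", msg))
printer.send("hello")
printer.send("world")
time.sleep(0.1)                            # let the actor drain its mailbox
```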

    Video Forensics in Cloud Computing: The Challenges & Recommendations

    Forensic analysis of large video surveillance datasets requires computationally demanding processing and significant storage space. The current standalone and often dedicated computing infrastructure used for the purpose is rather limited due to practical limits of hardware scalability and the associated cost. Recently, cloud computing has emerged as a viable solution to computing resource limitations, taking full advantage of virtualisation capabilities and distributed computing technologies. Consequently, the opportunities provided by cloud computing services to support the requirements of forensic video surveillance systems have recently been studied in the literature. However, such studies have been limited to very simple video analytic tasks carried out within a cloud based architecture. The requirements of a larger scale video forensic system are significantly greater and demand an in-depth study. In particular, there is a need to balance the benefits of cloud computing against the potential risks of security and privacy breaches of the video data. Understanding the different legal issues involved in deploying video surveillance in cloud computing will help make the proposed security architecture effective against potential threats and hence lawful. In this work we conduct a literature review to understand the current regulations and guidelines behind establishing a trustworthy, cloud based video surveillance system. In particular, we discuss the requirements of a legally acceptable video forensic system, study the current security and privacy challenges of cloud based computing systems, and make recommendations for the design of a cloud based video forensic system

    Applications Resilience on Clouds

    Cloud computing infrastructures support system and network fault-tolerance. They transparently repair and prevent communication and software errors. They also allow duplication and migration of jobs and data to prevent hardware failures. However, only limited work has been done so far on application resilience, i.e., the ability to resume normal execution after errors and abnormal executions in distributed environments and clouds. This paper addresses open issues and solutions for the detection and management of application errors. It also overviews a testbed used to design, deploy, execute, monitor, restart and resume distributed applications on cloud infrastructures in case of failures
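    A minimal sketch of what such application-level resilience can look like, assuming a simple checkpoint/restart scheme (the paper's testbed is not shown, and process() is a hypothetical stand-in for the application's real work): after an error, the job resumes from its last checkpoint instead of restarting from scratch.

```python
import pickle

def process(item):
    """Hypothetical stand-in for the application's real work."""
    print("processing", item)

def run_job(items, checkpoint_path="job.ckpt"):
    try:
        with open(checkpoint_path, "rb") as f:
            done = pickle.load(f)          # resume after a failure
    except FileNotFoundError:
        done = 0                           # first run: start from scratch

    for i in range(done, len(items)):
        process(items[i])                  # may raise on abnormal execution
        with open(checkpoint_path, "wb") as f:
            pickle.dump(i + 1, f)          # persist progress after each step

run_job(["frame-1", "frame-2", "frame-3"])
```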

    Context Aware Adaptable Applications - A global approach

    The requirements of current applications (mostly component based) cannot be expressed without a ubiquitous and mobile part, for end-users as well as for M2M (machine to machine) applications. Such an evolution implies context management in order to evaluate the consequences of mobility, and corresponding mechanisms to adapt, or be adapted to, the new environment. Such applications are then qualified as context aware applications. The first part of this paper presents an overview of context and its management by application adaptation. It starts with a definition and proposes a model for context. It also presents various techniques to adapt applications to the context, from self-adaptation to supervised approaches. The second part is an overview of architectures for adaptable applications. It focuses on platform-based solutions and shows the information flows between application, platform and context. Finally, it makes a synthesis proposal: a platform for adaptable context-aware applications called Kalimucho. We then present implementation tools for software components and a dataflow model in order to implement the Kalimucho platform
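    As a minimal sketch of supervised adaptation (the names below are illustrative assumptions, not the Kalimucho API), a platform can observe context changes and swap the component variant an application uses:

```python
class Platform:
    """Swaps component variants when the execution context changes."""
    def __init__(self):
        self._renderer = self._pick(battery_low=False)

    def _pick(self, battery_low: bool):
        # choose the component variant suited to the current context
        if battery_low:
            return lambda frame: print("low-power render:", frame)
        return lambda frame: print("full render:", frame)

    def on_context_change(self, battery_low: bool):
        self._renderer = self._pick(battery_low)   # adapt the application

    def render(self, frame):
        self._renderer(frame)

platform = Platform()
platform.render("frame-1")                     # full-quality path
platform.on_context_change(battery_low=True)   # context event: low battery
platform.render("frame-2")                     # adapted low-power path
```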