30 research outputs found

    Advanced OS deployment system

    The main objective of this project is to design and build an OS deployment system that takes advantage of the Linux OS and the developments of the Open Source community. This means using existing technologies that modularize the system; with this philosophy in mind, the number of code lines developed within the project is kept as small as possible. Like REMBO, the OS deployment system to be developed has to be transparent to the user, meaning a system with a friendly user interface that requires no technological knowledge to manage. Personal computers are a fundamental tool in many disciplines of study and work. In many cases they are offered to users in an open room, to be used freely for their needs. A college computer laboratory is one limiting example of this situation: students of different courses, with different needs, share the same infrastructure. Technical managers know well the maintenance needs of shared computer labs. The many people entering and using the computers cause a steady stream of small hardware repairs, resulting in a heterogeneous environment of repaired machines and substituted components. Then the real complexity arises: software management. On top of the software initially required on the computers, the heterogeneous hardware makes the number of combinations to maintain grow exponentially. The requirements are just a remote server connected by a network to the PC laboratory. This project, the OS deployment system, is a modified way of using a Linux thin client to restore images, using kexec to boot an OS without a reboot. The state of the art among similar tools is the Rembo system, recently bought by IBM and added to the Tivoli Provisioning Manager suite. This final thesis presents how the OS deployment system has been improved to ease the management of PC labs. The OS deployment system is a free software solution that permits:
    - The end-user to restore operating system images on demand, interactively and easily.
    - Multiple OS management.
    - Reduced maintenance time, by restoring a lab's computers to their initial configuration simultaneously and automatically.
    - Filling an empty PC with an operating system and software via the network.
    - Fast image restoring.
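
    The kexec step mentioned above can be illustrated concretely. The following minimal Python sketch stages a restored kernel with kexec-tools and jumps into it without a firmware reboot; the paths and kernel command line are hypothetical placeholders, not values prescribed by the thesis.

```python
import subprocess

# Hypothetical locations of the kernel and initrd of the freshly
# restored OS image; a real deployment would discover these.
KERNEL = "/mnt/restored/boot/vmlinuz"
INITRD = "/mnt/restored/boot/initrd.img"
CMDLINE = "root=/dev/sda1 ro quiet"  # assumed kernel command line

def boot_restored_os() -> None:
    # Stage the target kernel (kexec-tools, requires root privileges).
    subprocess.run(
        ["kexec", "-l", KERNEL, f"--initrd={INITRD}", f"--append={CMDLINE}"],
        check=True,
    )
    # Execute the staged kernel immediately, skipping firmware POST;
    # this is what lets the thin client boot the restored OS without
    # a full machine reboot.
    subprocess.run(["kexec", "-e"], check=True)

if __name__ == "__main__":
    boot_restored_os()
```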

    Assessing the evidential value of artefacts recovered from the cloud

    Cloud computing offers users low-cost access to computing resources that are scalable and flexible. However, it is not without its challenges, especially in relation to security. Cloud resources can be leveraged for criminal activities and the architecture of the ecosystem makes digital investigation difficult in terms of evidence identification, acquisition and examination. However, these same resources can be leveraged for the purposes of digital forensics, providing facilities for evidence acquisition, analysis and storage. Alternatively, existing forensic capabilities can be used in the Cloud as a step towards achieving forensic readiness. Tools can be added to the Cloud which can recover artefacts of evidential value. This research investigates whether artefacts that have been recovered from the Xen Cloud Platform (XCP) using existing tools have evidential value. To determine this, the research is broken into three distinct areas: adding existing tools to a Cloud ecosystem, recovering artefacts from that system using those tools, and then determining the evidential value of the recovered artefacts. From these experiments, three key steps for adding existing tools to the Cloud were determined: identification of the specific Cloud technology in use, identification of existing tools, and the building of a testbed. Stemming from this, three key components of artefact recovery are identified: the user, the audit log and the Virtual Machine (VM), along with two methodologies for artefact recovery in XCP. In terms of evidential value, this research proposes a set of criteria for the evaluation of digital evidence, stating that it should be authentic, accurate, reliable and complete. In conclusion, this research demonstrates the use of these criteria in the context of digital investigations in the Cloud and how each is met. This research shows that it is possible to recover artefacts of evidential value from XCP.
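
    To make the "authentic" criterion above concrete, the sketch below shows one conventional way authenticity can be supported for a recovered artefact: hash it at acquisition time and re-verify the digest later. This is a generic illustration under assumed names; the thesis does not mandate this exact procedure.

```python
import hashlib

def sha256_of(path: str) -> str:
    # Stream the artefact in 1 MiB chunks so large VM images or audit
    # logs can be hashed without loading them fully into memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artefact(path: str, recorded_digest: str) -> bool:
    # An artefact whose current hash matches the digest recorded at
    # acquisition time has not been altered in storage or transit,
    # which supports the "authentic" criterion.
    return sha256_of(path) == recorded_digest
```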

    Analyzing Metadata Performance in Distributed File Systems

    Distributed file systems are important building blocks in modern computing environments. The challenge of increasing I/O bandwidth to files has been largely resolved by the use of parallel file systems and sufficient hardware. However, determining the best means by which to manage large amounts of metadata, which contains information about the files and directories stored in a distributed file system, has proved a more difficult challenge. The objective of this thesis is to analyze the role of metadata and to present past and current implementations and access semantics. Understanding the development of the current file system interfaces and functionality is key to understanding their performance limitations. Based on this analysis, a distributed metadata benchmark termed DMetabench is presented. DMetabench significantly improves on existing benchmarks and allows metadata operations in a distributed file system to be stressed in a parallelized manner. Both intra-node and inter-node parallelism, current trends in computer architecture, can be tested explicitly with DMetabench. This matters because a distributed file system can have different semantics within a client node than between multiple nodes. Since measurements in larger distributed environments may exhibit performance artifacts that are difficult to explain from average numbers alone, DMetabench uses a time-logging technique to record time-related changes in the performance of metadata operations and also logs additional details of the runtime environment for post-benchmark analysis. Using the large production file systems at the Leibniz Supercomputing Center (LRZ) in Munich, the functionality of DMetabench is evaluated by means of measurements on different distributed file systems. The results not only demonstrate the effectiveness of the proposed methods but also provide unique insight into the current state of metadata performance in modern file systems.
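
    The time-logging idea can be sketched in a few lines: instead of reporting only an average, every metadata operation is logged with a wall-clock timestamp and its individual latency, so time-dependent effects remain visible. This is a simplified illustration of the technique, not DMetabench itself; the directory and operation count are arbitrary.

```python
import os
import time

def bench_metadata(directory: str, count: int = 1000):
    # Log of (wall-clock time, operation, latency in seconds) tuples;
    # keeping every sample preserves time-related performance changes
    # that a single average would hide.
    log = []
    for i in range(count):
        path = os.path.join(directory, f"f{i}")
        for op, fn in (("create", lambda p: open(p, "w").close()),
                       ("stat", os.stat),
                       ("unlink", os.unlink)):
            t0 = time.perf_counter()
            fn(path)
            log.append((time.time(), op, time.perf_counter() - t0))
    return log
```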

    A Study for Scalable Directory in Parallel File Systems

    One of the challenges that the design of parallel file systems for HPC (High Performance Computing) faces today is maintaining scalability while handling the I/O generated by parallel applications that access directories containing a large number of entries and perform hundreds of thousands of operations per second. Currently, highly concurrent access to large directories is poorly supported in parallel file systems. As a result, it is important to build a scalable directory service for parallel file systems that supports efficient concurrent access to large directories. In this thesis we demonstrate a scalable directory service designed for parallel file systems (specifically for PVFS) that can achieve high throughput and scalability while minimizing bottlenecks and synchronization overheads. We describe important concepts and goals in scalable directory service design and its implementation in the parallel file system simulator HECIOS. We also explore the simulation model of MPI programs and the PVFS file system in HECIOS, including the methods used to verify and validate it. Finally, we test our scalable directory service on HECIOS and analyze its performance and scalability based on the results. In summary, we demonstrate that our scalable directory service can effectively handle highly concurrent access to large directories in parallel file systems. We are also able to show that our scalable directory service scales well with the number of I/O nodes in the cluster.
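
    The core idea behind such a service can be illustrated with a short sketch: spread the entries of a single large directory across several servers by hashing each entry name, so highly concurrent creates and lookups do not all land on one node. The server names below are stand-ins and the scheme is a generic hash-partitioning illustration, not the PVFS/HECIOS implementation itself.

```python
import hashlib

# Hypothetical I/O servers across which one directory's entries spread.
SERVERS = ["io0", "io1", "io2", "io3"]

def server_for_entry(dirname: str, entry: str) -> str:
    # Hash (directory, entry) so placement is stable and deterministic:
    # every client computes the same server without coordination.
    h = hashlib.md5(f"{dirname}/{entry}".encode()).digest()
    return SERVERS[int.from_bytes(h[:4], "big") % len(SERVERS)]

# Entries of one hot directory fan out across the servers, so
# concurrent operations on that directory are served in parallel.
for name in ("a.dat", "b.dat", "c.dat", "d.dat"):
    print(name, "->", server_for_entry("/bigdir", name))
```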

    Building regulatory compliant storage systems

    In the past decade, informational records have become entirely digital. These include financial statements, health care records, student records, private consumer information and other sensitive data. Because of the delicate nature of the data these records contain, Congress and the courts have begun to recognize the importance of properly storing and securing electronic records. Examples of legislation include the Health Insurance Portability and Accountability Act (HIPAA).

    The development of an open-source forensics platform

    The rate at which technology evolves by far outpaces the rate at which methods are developed to prevent and prosecute digital crime. This unfortunate situation may potentially allow computer criminals to commit crimes using technologies for which no proper forensic investigative technique currently exists. Such a scenario would ultimately allow criminals to go free due to the lack of evidence to prove their guilt. A solution to this problem would be for law enforcement agencies and governments to invest in the research and development of forensic technologies in an attempt to keep pace with the development of digital technologies. Such an investment could potentially allow new forensic techniques to be developed and released more frequently, thus matching the appearance of new computing devices on the market. A key element in improving the situation is to produce more research results, using fewer resources, by performing research more efficiently. This can be achieved by improving the process used to conduct forensic research. One of the problem areas in research and development is the development of prototypes to prove a concept or to test a hypothesis. An in-depth understanding of the extremely technical aspects of operating systems, such as file system structures and memory management, is required to allow forensic researchers to develop prototypes to prove their theories and techniques. The development of such prototypes is an extremely challenging task. It is complicated by the presence of minute details that, if ignored, may have a negative impact on the accuracy of the results produced. If some of the complexities experienced in the development of prototypes could simply be removed from the equation, researchers might be able to produce more and better results with less effort, and thus ultimately speed up the forensic research process. This dissertation describes the development of a platform that facilitates the rapid development of forensic prototypes, allowing researchers to produce such prototypes using less time and fewer resources. The purpose of the platform is to provide a set of rich features that are likely to be required by developers performing research prototyping. The proposed platform contributes to the development of prototypes using fewer resources and at a faster pace. The development of the platform, as well as various considerations that helped to shape its architecture and design, are the focus points of this dissertation. Topics such as digital forensic investigations, open-source software development, and the development of the proposed forensic platform are discussed. Another purpose of this dissertation is to serve as a proof of concept for the developed platform. The development of a selection of forensic prototypes, as well as the results obtained, is also discussed.
    Dissertation (MSc)--University of Pretoria, 2009.
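
    As a purely hypothetical illustration of the kind of abstraction the dissertation argues for, the sketch below shows a platform that owns the low-level evidence parsing while a research prototype plugs in as a small callback. None of the names or interfaces here are taken from the actual platform.

```python
from typing import Callable, Iterator

class FileRecord:
    # Minimal stand-in for whatever rich record a real platform yields.
    def __init__(self, path: str, deleted: bool):
        self.path, self.deleted = path, deleted

def walk_image(image_path: str) -> Iterator[FileRecord]:
    # A real platform would parse file system structures here; this
    # stub only yields canned records for illustration.
    yield FileRecord("/docs/report.txt", deleted=False)
    yield FileRecord("/tmp/evidence.db", deleted=True)

def run_prototype(image_path: str,
                  analyze: Callable[[FileRecord], None]) -> None:
    # The platform drives the walk; the prototype supplies only the
    # analysis logic, which is the productivity gain argued for above.
    for record in walk_image(image_path):
        analyze(record)

# A prototype that flags deleted files becomes a one-liner, not a parser.
run_prototype("disk.img",
              lambda r: print("deleted:", r.path) if r.deleted else None)
```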