
    Lecture - CSCI 275: Linux Systems Administration and Security


    NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications, volume 2

    This report contains copies of nearly all of the technical papers and viewgraphs presented at the NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications. The conference served as a broad forum for discussing a number of important issues in the field of mass storage systems. Topics include the following: magnetic disk and tape technologies; optical disk and tape; software storage and file management systems; and experiences with the use of a large, distributed storage system. The technical presentations describe, among other things, integrated mass storage systems that are expected to become available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990s.

    Monitoring and Failure Recovery of Cloud-Managed Digital Signage

    Digital signage is widely used in fields such as transport systems, retail outlets, and entertainment to display information in the form of images, videos, and text. The reliability of these resources, the availability of required services, and security measures play a key role in the adoption of such systems. Efficient management of a digital signage system is a challenging task for service providers. Many causes can lead to malfunction, such as faulty displays and network, hardware, or software failures, and these failures are often repetitive. The traditional process of recovering from them involves tedious and cumbersome diagnosis; in many cases, technicians need to visit the site physically, increasing maintenance costs and recovery time. In this thesis, we propose a solution that monitors, diagnoses, and recovers from known failures by connecting the displays to a cloud. A cloud-based remote and autonomous server configures the content of remote displays and updates them dynamically. Each display tracks its running processes and periodically sends trace and system logs to the server. These logs, stored at the server using a customized log management module, are analysed for failures. In addition, the displays incorporate self-recovery procedures to deal with failures when they are unable to establish a connection to the cloud. The proposed solution is implemented on a Linux system and evaluated by deploying the server on the Amazon Web Services (AWS) cloud. The main result of the thesis is a collection of techniques for resolving display system failures remotely.
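
    The thesis does not publish its client code, so the sketch below is only an illustration of the reporting-and-fallback loop described in the abstract: a display periodically ships recent logs to the cloud server and, when the server is unreachable, falls back to a local self-recovery action. The endpoint URL, the signage-player service name, and the recovery command are all assumptions, not the thesis's actual implementation.

        # Illustrative sketch only; the endpoint, service name, and recovery
        # action are assumptions rather than the thesis's real code.
        import subprocess
        import time

        import requests

        SERVER_URL = "https://signage.example.com/api/logs"  # hypothetical endpoint
        INTERVAL = 60  # seconds between status reports

        def collect_logs() -> str:
            # Tail recent journal entries for the (assumed) player service.
            out = subprocess.run(
                ["journalctl", "-u", "signage-player", "-n", "100", "--no-pager"],
                capture_output=True, text=True,
            )
            return out.stdout

        def self_recover() -> None:
            # Local fallback when the cloud is unreachable: restart the player.
            subprocess.run(["systemctl", "restart", "signage-player"])

        while True:
            try:
                requests.post(SERVER_URL, json={"logs": collect_logs()}, timeout=10)
            except requests.RequestException:
                self_recover()
            time.sleep(INTERVAL)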

    Automated Database Refresh in Very Large and Highly Replicated Environments

    Refreshing non-production database environments is a fundamental activity. A non-production environment must closely resemble the production system and be populated with accurate, consistent data so that changes can be tested effectively before moving into production. If the development system reflects the scenarios of the live system, programming shortcomings can also be minimized. These needs add pressure to refresh the system from production frequently. Many organizations therefore need a proven, performant way to create or move data into their non-production environments that neither breaks business rules nor exposes confidential information to non-privileged contractors and employees. The academic literature on refreshing non-production environments, however, is weak, restricted largely to instruction steps. To correct this situation, this study examines ways to refresh development, QA, test, or staging environments with production-quality data while maintaining the original structures, so that developers' and testers' releases being promoted to production are not impacted by a refresh. The study includes the design, development, and testing of a system that semi-automatically backs up (saves a copy of) the current database structures, takes a clone of the production database from the reporting or Oracle Recovery Manager (RMAN) servers, reapplies the structures, and obfuscates confidential data. The study used an Oracle Real Application Clusters (RAC) environment for the refresh. The findings identify methodologies for refreshing non-production environments in a timely manner without exposing confidential data and without overwriting the current structures in the database being refreshed. They also identify significant savings in time and money that can be made by preserving the developers' structures while refreshing the data.
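
    As a concrete illustration of the obfuscation step, the sketch below masks a confidential column after the clone has been restored. It is a minimal Python example assuming a hypothetical CUSTOMERS table with an EMAIL column and the python-oracledb driver; the study's actual schema, masking rules, and tooling are not given in the abstract.

        # Minimal post-clone masking sketch; the table, column, and connection
        # details are illustrative assumptions, not the study's schema.
        import hashlib

        import oracledb  # python-oracledb driver

        conn = oracledb.connect(user="refresh_admin", password="***",
                                dsn="devhost/devpdb")  # hypothetical DSN
        cur = conn.cursor()

        # Replace each email with a deterministic, non-reversible token so data
        # patterns survive for testing but no real address is exposed.
        cur.execute("SELECT customer_id, email FROM customers")
        for customer_id, email in cur.fetchall():
            token = hashlib.sha256(email.encode()).hexdigest()[:16] + "@masked.example"
            cur.execute(
                "UPDATE customers SET email = :e WHERE customer_id = :i",
                {"e": token, "i": customer_id},
            )
        conn.commit()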

    Group sharing and random access in cryptographic storage file systems

    Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999. Vita. Includes bibliographical references (p. 79-83). By Kevin E. Fu.

    Introductory Computer Forensics

    INTERPOL (the International Criminal Police Organization) built cybercrime programs to keep up with emerging cyber threats, and aims to coordinate and assist international operations for fighting crimes involving computers. Although significant international efforts are being made in dealing with cybercrime and cyber-terrorism, finding effective, cooperative, and collaborative ways to deal with complicated cases that span multiple jurisdictions has proven difficult in practice.

    Using Infrastructure as Code for Web Application Disaster Recovery

    Legacy, industry-established disaster recovery approaches are known for imposing relatively high additional expenditure, limiting the use of such mechanisms to only the most business-critical IT systems and applications. With the emergence of Infrastructure-as-Code practices, however, this paradigm can now be challenged. The objective of this thesis is to design and implement a novel disaster recovery tool that can be used for the recovery of a web application. Following the design science methodology, this thesis proposes a primary-fallback oriented disaster recovery model in which the fallback site is an empty cloud service account, into which a near-duplicate copy of the primary site is recreated in the event of a disaster. The proposed recovery process consists of two phases: the second-phase stateful application data recovery procedure is kept as an add-on to the first-phase stateless infrastructure management practices. For switching from the primary to the fallback site, the design proposes a DNS failover mechanism: by modifying the DNS A-record associations of the public IP address at the start of the recovery process, traffic can be directed to the recovered site with minimal delay. Based on the insights and data gathered during and after the evaluation phase, the tool, built with Ansible and Terraform, was found to be functional, performant, and cost-efficient within the limits and expectations set by legacy disaster recovery practices.
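
    The thesis drives its failover with Terraform and Ansible; as a language-neutral illustration of the same DNS failover idea, the sketch below performs the A-record switch with Python and boto3 against Route 53. The hosted zone ID, record name, and IP address are placeholders.

        # DNS failover sketch (the thesis itself uses Terraform/Ansible);
        # the zone ID, record name, and IP below are placeholders.
        import boto3

        route53 = boto3.client("route53")

        def point_traffic_at(recovered_ip: str) -> None:
            # UPSERT the A record so clients resolve to the recreated site.
            route53.change_resource_record_sets(
                HostedZoneId="Z0000000EXAMPLE",  # hypothetical hosted zone
                ChangeBatch={
                    "Changes": [{
                        "Action": "UPSERT",
                        "ResourceRecordSet": {
                            "Name": "app.example.com",
                            "Type": "A",
                            "TTL": 60,  # a short TTL keeps failover fast
                            "ResourceRecords": [{"Value": recovered_ip}],
                        },
                    }]
                },
            )

        point_traffic_at("203.0.113.10")  # public IP of the recovered site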

    Assessing the evidential value of artefacts recovered from the cloud

    Cloud computing offers users low-cost access to computing resources that are scalable and flexible. However, it is not without its challenges, especially in relation to security. Cloud resources can be leveraged for criminal activities, and the architecture of the ecosystem makes digital investigation difficult in terms of evidence identification, acquisition, and examination. However, these same resources can be leveraged for the purposes of digital forensics, providing facilities for evidence acquisition, analysis, and storage. Alternatively, existing forensic capabilities can be used in the Cloud as a step towards achieving forensic readiness: tools can be added to the Cloud to recover artefacts of evidential value. This research investigates whether artefacts recovered from the Xen Cloud Platform (XCP) using existing tools have evidential value. To determine this, the work is broken into three distinct areas: adding existing tools to a Cloud ecosystem, recovering artefacts from that system using those tools, and determining the evidential value of the recovered artefacts. From these experiments, three key steps for adding existing tools to the Cloud were determined: identifying the specific Cloud technology in use, identifying existing tools, and building a testbed. Stemming from this, three key components of artefact recovery are identified: the user, the audit log, and the Virtual Machine (VM), along with two methodologies for artefact recovery in XCP. In terms of evidential value, this research proposes a set of criteria for the evaluation of digital evidence, stating that it should be authentic, accurate, reliable, and complete. In conclusion, this research demonstrates the use of these criteria in the context of digital investigations in the Cloud and how each is met, showing that it is possible to recover artefacts of evidential value from XCP.
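
    One common way the authenticity and accuracy criteria are supported in practice is to hash an artefact at acquisition time and re-verify the digest before examination. The sketch below shows that routine in Python; the artefact path is a placeholder, not a file from the XCP experiments.

        # Integrity-check sketch; the artefact path is a placeholder.
        import hashlib
        from pathlib import Path

        def sha256_of(path: Path) -> str:
            h = hashlib.sha256()
            with path.open("rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
                    h.update(chunk)
            return h.hexdigest()

        artefact = Path("audit.log")            # placeholder artefact
        acquired = sha256_of(artefact)          # recorded at acquisition
        assert sha256_of(artefact) == acquired  # re-verify before examination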

    The Fifth Workshop on HPC Best Practices: File Systems and Archives

    The workshop on High Performance Computing (HPC) Best Practices on File Systems and Archives was the fifth in a series sponsored jointly by the Department of Energy (DOE) Office of Science and the DOE National Nuclear Security Administration. The workshop gathered technical and management experts in the operation of HPC file systems and archives from around the world. Attendees identified and discussed best practices in use at their facilities and documented their findings for the DOE and HPC community in this report.