
    Improved metrics collection and correlation for the CERN cloud storage test framework

    Storage space is one of the most important resources that the European Organization for Nuclear Research (CERN) needs for its experiments and operations. Part of the Data & Storage Services (IT-DSS) group’s work at CERN is focused on testing and evaluating the cloud storage system provided by the openlab partner Huawei, the Huawei Universal Disk Storage System (UDS). As a whole, the system consists of both software and hardware. The objective of the Huawei-CERN partnership is to investigate the performance of the cloud storage system. Among the interesting questions are the system’s scalability, reliability and ability to store and retrieve files. During the tests, possible bugs and malfunctions can be discovered and corrected. Different versions of the storage software that runs inside the storage system can also be compared to each other. The nature of testing and benchmarking a storage system gives rise to several small tasks that can be completed during a short summer internship. To test the storage system, a test framework developed by the DSS group is used. The framework consists of various types of file transfer tests, client and server monitoring programs, and log file analysis programs. Part of the work consisted of additions to the existing framework and part of developing new tools. Metrics collection was the central theme; metrics are to be understood as system statistics, such as memory consumption or processor usage. Memory usage and disk reads/writes were added to the existing client real-time monitoring framework. CPU and memory usage, network traffic (bytes received/sent) and the number of running processes are collected from a client computer before and after a daily test. Two other additions are visualization for storage system log files and a new monitoring tool for the storage system. This report is divided into sections, each describing a part of the framework that was improved or added, the problem it addresses, and the final solution. A short description of the code and the architecture is also included.
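    The before/after collection described in the abstract can be pictured with a short sketch. The snippet below is illustrative only and is not the DSS framework’s actual code: it assumes the third-party psutil package, and the function names are hypothetical.

```python
# Hypothetical sketch of client-side metrics collection around a daily test run.
# Assumes the third-party psutil package; names are illustrative, not the DSS framework's API.
import time

import psutil


def snapshot() -> dict:
    """Collect the client metrics named above: CPU, memory, disk I/O, network traffic, process count."""
    mem = psutil.virtual_memory()
    disk = psutil.disk_io_counters()
    net = psutil.net_io_counters()
    return {
        "timestamp": time.time(),
        "cpu_percent": psutil.cpu_percent(interval=1),
        "memory_used_bytes": mem.used,
        "disk_read_bytes": disk.read_bytes,
        "disk_write_bytes": disk.write_bytes,
        "net_bytes_received": net.bytes_recv,
        "net_bytes_sent": net.bytes_sent,
        "process_count": len(psutil.pids()),
    }


def run_with_metrics(daily_test):
    """Take a snapshot before and after the test so the two can be correlated later."""
    before = snapshot()
    daily_test()
    after = snapshot()
    return {"before": before, "after": after}
```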

    The zombies strike back: Towards client-side BeEF detection

    A web browser is an application that comes bundled with every consumer operating system, including both desktop and mobile platforms. A modern web browser is complex software that has access to system-level features, includes various plugins and requires the availability of an Internet connection. Like other multifaceted software products, web browsers are prone to numerous vulnerabilities. Exploitation of these vulnerabilities can result in destructive consequences ranging from identity theft to network infrastructure damage. BeEF, the Browser Exploitation Framework, allows taking advantage of these vulnerabilities to launch a diverse range of readily available attacks from within the browser context. Existing defensive approaches aimed at hardening network perimeters and detecting common threats based on traffic analysis have not been found successful in the context of BeEF detection. This paper presents a proof-of-concept approach to BeEF detection in its own operating environment – the web browser – based on global context monitoring, abstract syntax tree fingerprinting and real-time network traffic analysis.
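    As a rough illustration of the fingerprinting idea (not the paper’s implementation, which runs inside the browser and works on abstract syntax trees), the sketch below matches a normalized content hash of a loaded script against fingerprints of previously catalogued BeEF hook builds; the fingerprint set is hypothetical.

```python
# Illustrative stand-in for script fingerprinting: a normalized content hash
# rather than a true AST fingerprint, and outside the browser context.
import hashlib
import re


def normalize(script_source: str) -> str:
    """Drop comments and collapse whitespace so trivial edits do not change the hash."""
    stripped = re.sub(r"/\*.*?\*/|//[^\n]*", "", script_source, flags=re.S)
    return re.sub(r"\s+", " ", stripped).strip()


def fingerprint(script_source: str) -> str:
    return hashlib.sha256(normalize(script_source).encode()).hexdigest()


def looks_like_beef_hook(script_source: str, known_fingerprints: set[str]) -> bool:
    """True if the script matches a previously catalogued hook build (hypothetical set)."""
    return fingerprint(script_source) in known_fingerprints
```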

    Express: a web-based technology to support human and computational experimentation

    Experimental cognitive psychology has been greatly assisted by the development of general computer-based experiment presentation packages. Typically, however, such packages provide little support for running participants on different computers. It is left to the experimenter to ensure that group sizes are balanced between conditions and to merge data gathered on different computers once the experiment is complete. Equivalent issues arise in the evaluation of parameterized computational models, where it is frequently necessary to test a model's behavior over a range of parameter values (which amount to between-subjects factors) and where such testing can be speeded up significantly by the use of multiple processors. This article describes Express, a Web-based technology for coordinating "clients" (human participants or computational models) and collating client data. The technology provides an experiment design editor, client coordination facilities (e.g., automated randomized assignment of clients to groups so that group sizes are balanced), general data collation and tabulation facilities, a range of basic statistical functions (which are constrained by the specified experimental design), and facilities to export data to standard statistical packages (such as SPSS). We report case studies demonstrating the utility of Express in both human and computational experiments. Express may be freely downloaded from the Express Web site (http://express.psyc.bbk.ac.uk/).
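    The balanced randomized assignment that Express automates can be sketched in a few lines; the class, store and group names below are illustrative and are not Express’s actual API.

```python
# Minimal sketch of balanced randomized assignment: each new client joins a
# randomly chosen group among those with the fewest members. Not Express's API.
import random
from collections import Counter


class BalancedAssigner:
    def __init__(self, groups):
        self.counts = Counter({group: 0 for group in groups})

    def assign(self, client_id: str) -> str:
        fewest = min(self.counts.values())
        candidates = [g for g, n in self.counts.items() if n == fewest]
        group = random.choice(candidates)
        self.counts[group] += 1
        return group


assigner = BalancedAssigner(["control", "condition_a", "condition_b"])
print(assigner.assign("participant-001"))  # group sizes never differ by more than one
```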

    A Fault Tolerant, Dynamic and Low Latency BDII Architecture for Grids

    The current BDII model relies on information gathering from agents that run on each core node of a Grid. This information is then published into a Grid-wide information resource known as the Top BDII. The top-level BDIIs are typically updated in cycles of a few minutes each. A new BDII architecture is proposed and described in this paper, based on the hypothesis that only a few attribute values change in each BDII information cycle and that consequently it may not be necessary to update every parameter in a cycle. It has been demonstrated that significant performance gains can be achieved by exchanging only the information about records that changed during a cycle. Our investigations have led us to implement a low latency and fault tolerant BDII system that involves only minimal data transfer and facilitates secure transactions in a Grid environment.
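    The core idea, publishing only the records whose attribute values changed since the previous cycle, can be pictured as a simple snapshot diff; the record keys and attribute names below are illustrative, not actual GLUE schema entries.

```python
# Sketch of a delta-based information cycle: only changed or removed records
# are exchanged, instead of republishing the full resource set every cycle.
def delta(previous: dict, current: dict) -> dict:
    changed = {
        record: attrs
        for record, attrs in current.items()
        if previous.get(record) != attrs
    }
    removed = [record for record in previous if record not in current]
    return {"changed": changed, "removed": removed}


old_cycle = {"siteA/ce01": {"FreeCPUs": 120}, "siteB/se01": {"UsedSpace": 42}}
new_cycle = {"siteA/ce01": {"FreeCPUs": 118}, "siteB/se01": {"UsedSpace": 42}}
print(delta(old_cycle, new_cycle))  # only siteA/ce01 needs to be pushed to the Top BDII
```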

    Online monitoring using Kismet

    Colleges and universities currently use online exams for student evaluation. Students can take assigned exams using their laptop computers and email their results to their instructor; this process makes testing more efficient and convenient for both students and faculty. However, taking exams while connected to the Internet opens many opportunities for plagiarism and cheating. In this project, we design, implement, and test a tool that instructors can use to monitor the online activity of students during an in-class online examination. This tool uses a wireless sniffer, Kismet, to capture and classify packets in real time. If a student attempts to access a site that is not allowed, the instructor is notified via an Android application or over the Internet. Identifying a student who is cheating is challenging since many applications send packets without user intervention. We provide experimental results from realistic test environments to illustrate the success of our proposed approach.
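    The classification step can be illustrated with a small stand-in. The project itself drives Kismet as the wireless sniffer; the sketch below instead uses the third-party scapy package to watch DNS lookups and flag domains outside a hypothetical exam allowlist.

```python
# Illustrative stand-in for the allowlist check (the project uses Kismet; this
# sketch uses scapy and a hypothetical allowed-domain set).
from scapy.all import DNSQR, sniff

ALLOWED_DOMAINS = {"exam.university.edu", "lms.university.edu"}


def classify(packet):
    """Alert when a client resolves a domain outside the exam allowlist."""
    if packet.haslayer(DNSQR):
        domain = packet[DNSQR].qname.decode().rstrip(".")
        if not any(domain == d or domain.endswith("." + d) for d in ALLOWED_DOMAINS):
            print(f"ALERT: disallowed lookup for {domain}")  # e.g. forward to the Android app


# Capture DNS traffic on the monitored interface (requires root privileges).
sniff(filter="udp port 53", prn=classify, store=False)
```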

    Predicting Intermediate Storage Performance for Workflow Applications

    Configuring a storage system to better serve an application is a challenging task complicated by a multidimensional, discrete configuration space and the high cost of exploring that space (e.g., by running the application with different storage configurations). To enable selecting the best configuration in a reasonable time, we design an end-to-end performance prediction mechanism that estimates the turn-around time of an application using the storage system under a given configuration. This approach focuses on a generic object-based storage system design, supports exploring the impact of optimizations targeting workflow applications (e.g., various data placement schemes) in addition to other, more traditional, configuration knobs (e.g., stripe size or replication level), and models the system operation at the data-chunk and control-message level. This paper presents our experience to date with designing and using this prediction mechanism. We evaluate this mechanism using micro-benchmarks as well as synthetic benchmarks mimicking real workflow applications, and a real application. A preliminary evaluation shows that we are on track to meet our objectives: the mechanism can scale to model a workflow application run on an entire cluster while offering an over 200x speedup factor (normalized by resource) compared to running the actual application, and can achieve, in the limited number of scenarios we study, a prediction accuracy that enables identifying the best storage system configuration.
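    To make the intended use concrete, the sketch below enumerates a small discrete configuration space and keeps the configuration with the lowest predicted turn-around time; the knob values and the predictor body are placeholders, not the paper’s model.

```python
# Placeholder predictor plus exhaustive search over a small discrete configuration
# space; the knobs mirror those named above (stripe size, replication level, data
# placement scheme) but the values and the cost formula are made up.
from itertools import product


def predict_turnaround(stripe_size_kb: int, replication: int, placement: str) -> float:
    """Stand-in for the model-based predictor; returns an estimated runtime in seconds."""
    base = 1000.0 / stripe_size_kb + 5.0 * replication
    return base * (0.8 if placement == "workflow-aware" else 1.0)


def best_configuration():
    space = product(
        [256, 1024, 4096],              # stripe size (KB)
        [1, 2, 3],                      # replication level
        ["default", "workflow-aware"],  # data placement scheme
    )
    return min(space, key=lambda cfg: predict_turnaround(*cfg))


print(best_configuration())  # configuration with the lowest predicted turn-around time
```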

    The transition to a recovery based service: exploring the perspectives and practices of staff
