154 research outputs found

    BaseFs - Basically Available, Soft State, Eventually Consistent Filesystem for Cluster Management

    Get PDF
    A peer-to-peer distributed filesystem for community cloud management. https://github.com/glic3rinu/basef

    ANANAS - A Framework For Analyzing Android Applications

    Full text link
    Android is an open software platform for mobile devices with a large market share in the smartphone sector. The openness of the system, as well as its wide adoption, leads to an increasing amount of malware developed for this platform. ANANAS is an expandable and modular framework for analyzing Android applications. It takes care of common needs for dynamic malware analysis and provides an interface for the development of plugins. Adaptability and expandability were the main design goals during the development process. An abstraction layer for simple user interaction and phone event simulation is also part of the framework. It allows an analyst to script the required user simulation or phone events on demand, or to adjust the simulation to their needs. Six plugins have been developed for ANANAS. They represent well-known techniques for malware analysis, such as system call hooking and network traffic analysis. The focus clearly lies on dynamic analysis, as five of the six plugins are dynamic analysis methods. Comment: Paper accepted at First Int. Workshop on Emerging Cyberthreats and Countermeasures ECTCM 201
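
    The abstract does not describe ANANAS's actual plugin API, so the following is only a minimal sketch of what a plugin interface for a dynamic-analysis framework of this kind might look like; all class and method names are invented for illustration.

    ```python
    # Hypothetical sketch of a plugin interface for a dynamic-analysis
    # framework such as ANANAS; names are invented for illustration and
    # do not reflect the framework's real API.
    from abc import ABC, abstractmethod


    class AnalysisPlugin(ABC):
        """Base class that every analysis plugin derives from."""

        @abstractmethod
        def setup(self, device, apk_path: str) -> None:
            """Prepare the plugin before the sample starts (e.g. install hooks)."""

        @abstractmethod
        def on_event(self, event: dict) -> None:
            """Receive a simulated user or phone event (tap, SMS, incoming call, ...)."""

        @abstractmethod
        def report(self) -> dict:
            """Return the plugin's findings after the analysis run."""


    class NetworkTrafficPlugin(AnalysisPlugin):
        """Toy example: record destination hosts the sample contacts."""

        def setup(self, device, apk_path: str) -> None:
            self.hosts = set()

        def on_event(self, event: dict) -> None:
            if event.get("type") == "network":
                self.hosts.add(event["destination"])

        def report(self) -> dict:
            return {"contacted_hosts": sorted(self.hosts)}
    ```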

    GRIDSITE

    Get PDF
    GridSite provides grid credential, proxy certificate and delegation support for web-based applications.

    RestFS: The Filesystem as a Connector Abstraction for Flexible Resource and Service Composition

    Get PDF
    The broader context for this chapter comprises business scenarios requiring resource and/or service composition, such as (intra-company) enterprise application integration (EAI) and (inter-company) web service orchestration. The resources and services involved vary widely in terms of the protocols they support, which typically fall into remote procedure call (RPC), resource-oriented (HTTP and WebDAV) and message-oriented protocols. By recognizing the similarity between web-based resources and the kind of resources exposed in the form of filesystems in operating systems, we have found it feasible to map the former to the latter using a uniform, configurable connector layer. Once a remote resource has been exposed in the form of a local filesystem, one can access the resource programmatically using the operating system's standard filesystem application programming interface (API). Taking this idea one step further, one can then aggregate or otherwise orchestrate two or more remote resources using the same standard API. Filesystem APIs are available in all major operating systems. Some of those, most notably all flavors of UNIX including GNU/Linux, have a rich collection of small, flexible command-line utilities, as well as various inter-process communication (IPC) mechanisms. These tools can be used in scripts and programs that compose the various underlying resources in powerful ways. Further exploration of the role of a filesystem-based connector layer in the enterprise application architecture has led us to the question of whether one can achieve a fully compositional, arbitrarily deep hierarchical architecture by re-exposing the aggregated resources as a single, composite resource that, in turn, can be accessed in the same form as the original resources. This is indeed possible in two flavors: 1) the composite resource can be exposed internally as a filesystem for further local composition; 2) the composite resource can be exposed externally as a RESTful resource for further external composition. We expect the ability to compose resources hierarchically to facilitate the construction of complex, robust resource- and service-oriented software systems, and we hope that concrete case studies will further substantiate our position.
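
    To make the composition idea concrete, here is a minimal sketch assuming a RestFS-style connector has already mounted two remote resources as local directories; every mount point and resource name below is a placeholder, and the translation of reads and writes into HTTP requests is handled entirely by the (assumed) connector layer.

    ```python
    # Illustrative sketch only: mount points are hypothetical and assume a
    # RestFS-style connector has exposed two remote HTTP resources as local
    # filesystems (e.g. via FUSE). Composition then needs nothing beyond the
    # standard filesystem API.
    import json
    from pathlib import Path

    # Hypothetical mount points for two remote resources and one composite output.
    orders = Path("/mnt/restfs/orders-service/recent.json")
    customers = Path("/mnt/restfs/crm/customers.json")
    composite = Path("/mnt/restfs/reporting/order-report.json")

    # Reading a "file" is resolved by the connector into requests against the
    # underlying remote resource.
    order_data = json.loads(orders.read_text())
    customer_data = {c["id"]: c for c in json.loads(customers.read_text())}

    # Join the two resources locally, then write the result back; the write is
    # likewise forwarded to the target resource by the connector layer.
    report = [
        {"order": o["id"],
         "customer": customer_data.get(o["customer_id"], {}).get("name")}
        for o in order_data
    ]
    composite.write_text(json.dumps(report, indent=2))
    ```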

    An SNMP filesystem in userspace

    Get PDF
    Modern computer networks are constantly increasing in size and complexity. Despite this, data networks are a critical factor for the success of many organizations. Monitoring their health and operational status is fundamental, and is usually performed through specific network management architectures, developed and standardized over the last decades. On the other hand, file systems have become one of the best-known paradigms of human-computer interaction, and have been around since the early days of the personal computer industry. In this paper we propose a file system interface to network management information, allowing users to open, edit and visualize network and systems operation information.
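
    As a rough illustration of the interaction model, the sketch below shows how a user-level program might work with such a filesystem once mounted; the mount point and directory layout are assumptions for illustration, not the layout used by the paper's prototype, although sysDescr, ifOperStatus and sysContact are standard MIB objects.

    ```python
    # Minimal sketch of user-level interaction with an SNMP filesystem; the
    # mount point and file layout are hypothetical.
    from pathlib import Path

    AGENT = Path("/net/snmp/router1")          # hypothetical per-agent directory

    # Reading a file corresponds to an SNMP GET on the underlying object.
    sysdescr = (AGENT / "system" / "sysDescr").read_text().strip()
    print(f"router1: {sysdescr}")

    # Listing a directory corresponds to walking a subtree of the MIB.
    for iface in sorted((AGENT / "interfaces").iterdir()):
        status = (iface / "ifOperStatus").read_text().strip()
        print(f"{iface.name}: {status}")

    # Writing a file corresponds to an SNMP SET on a writable object.
    (AGENT / "system" / "sysContact").write_text("noc@example.org\n")
    ```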

    Fast and Service-preserving Recovery from Malware Infections using CRIU

    Get PDF
    Once a computer system has been infected with malware, restoring it to an uninfected state often requires costly service-interrupting actions such as rolling back to a stable snapshot or reimaging the system entirely. We present CRIU-MR: a technique for restoring an infected server system running within a Linux container to an uninfected state in a service-preserving manner using Checkpoint/Restore in Userspace (CRIU). We modify the CRIU source code to flexibly integrate with existing malware detection technologies so that it can remove suspected malware processes within a Linux container during a checkpoint/restore event. This allows for infected containers with a potentially damaged filesystem to be checkpointed and subsequently restored on a fresh backup filesystem while both removing malware processes and preserving the state of trusted ones. This method can be quickly performed with minimal impact on service availability, restoring active TCP connections and completely removing several types of malware from infected Linux containers
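
    CRIU-MR integrates the filtering inside a modified CRIU, which the abstract does not detail; the following is only an external sketch of the checkpoint, filter, and restore sequence using the stock criu command line, with a hypothetical detector standing in for the integrated malware detection.

    ```python
    # Conceptual sketch of the checkpoint -> filter -> restore sequence described
    # above, driven from outside with the stock `criu` CLI. CRIU-MR itself does
    # the filtering inside modified CRIU; `detect_malicious_pids` is a
    # hypothetical stand-in for whatever detector is integrated.
    import subprocess


    def detect_malicious_pids(image_dir: str) -> set[int]:
        """Hypothetical hook: inspect the checkpoint images (or a detector's
        report) and return PIDs believed to belong to malware."""
        return set()


    def checkpoint_filter_restore(container_init_pid: int, image_dir: str) -> None:
        # 1. Checkpoint the container's process tree, preserving established
        #    TCP connections so service can resume after restore.
        subprocess.run(
            ["criu", "dump", "-t", str(container_init_pid),
             "-D", image_dir, "--tcp-established"],
            check=True,
        )

        # 2. Decide which checkpointed processes are malicious. In CRIU-MR this
        #    decision is applied during the restore itself.
        bad_pids = detect_malicious_pids(image_dir)
        print("processes to drop:", bad_pids)

        # 3. Restore the remaining processes; switching the container onto a
        #    fresh backup filesystem is handled by the container runtime and
        #    is not shown here.
        subprocess.run(
            ["criu", "restore", "-D", image_dir, "--tcp-established"],
            check=True,
        )
    ```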

    Fedora Commons With Apache Hadoop: A Research Study

    Get PDF
    The Digital Collections digital repository at the University of Maryland Libraries is growing and in need of a new backend storage system to replace the current filesystem storage. Though not a traditional storage management system, we chose to evaluate Apache Hadoop because of its large and growing community and software ecosystem. Additionally, Hadoop’s capabilities for distributed computation could prove useful in providing new kinds of digital object services and maintenance for ever-increasing amounts of data. We tested storage of Fedora Commons data in the Hadoop Distributed File System (HDFS) using an early development version of the Akubra-HDFS interface created by Frank Asseg. This article examines the findings of our research study, which evaluated Fedora-Hadoop integration in the areas of performance, ease of access, security, disaster recovery, and costs.
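
    The Akubra-HDFS layer evaluated in the study is a Java component; purely as a rough illustration of the kind of blob reads and writes against HDFS that such a backend performs, here is a minimal Python sketch using pyarrow's HDFS client, with the hostname, port and paths as placeholders.

    ```python
    # Rough illustration of storing and retrieving a repository content blob in
    # HDFS from Python via pyarrow; host, port and paths are placeholders and
    # this is not the Akubra-HDFS interface itself.
    from pyarrow import fs

    hdfs = fs.HadoopFileSystem(host="namenode.example.org", port=8020)

    blob_path = "/fedora/datastreams/demo:1/OBJ"

    # Write a datastream blob.
    with hdfs.open_output_stream(blob_path) as out:
        out.write(b"<foxml>...</foxml>")

    # Read it back.
    with hdfs.open_input_stream(blob_path) as src:
        data = src.read()

    print(len(data), "bytes stored at", blob_path)
    ```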

    Navigating Unmountable Media with the Digital Forensics XML File System

    Get PDF
    Some computer storage is non-navigable by current general-purpose computers. This could be because of obsolete interface software, or a more specialized storage system lacking widespread support. These storage systems may contain artifacts of great cultural, historical, or technical significance, but implementing compatible interfaces that are fully navigable may be beyond available resources. We developed the DFXML File System (DFXMLFS) to enable navigation of arbitrary storage systems that fulfill a minimum feature set of the POSIX file system standard. Our approach advocates for a two-step workflow that separates parsing the storage’s file system structures from navigating the storage like a contemporary file system, including file contents. The parse extracts essential file system metadata, serializing to Digital Forensics XML for later consumption as a read-only file system
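
    A minimal sketch of the first, parsing-oriented step of that workflow is given below: extracting per-file metadata from a DFXML document. The element names (fileobject, filename, filesize) follow common DFXML output but should be treated as assumptions; real DFXML also carries namespaces and byte-run information needed to serve file contents in the second, navigation step.

    ```python
    # Sketch of reading per-file metadata out of a DFXML document, the input to
    # a DFXMLFS-style read-only filesystem. Element names are assumptions based
    # on common DFXML output.
    import xml.etree.ElementTree as ET


    def list_file_objects(dfxml_path: str):
        """Yield (filename, size) pairs for every fileobject in a DFXML document."""
        tree = ET.parse(dfxml_path)
        for elem in tree.iter():
            # Ignore XML namespaces by matching on the local tag name.
            if elem.tag.rsplit("}", 1)[-1] != "fileobject":
                continue
            name = size = None
            for child in elem:
                tag = child.tag.rsplit("}", 1)[-1]
                if tag == "filename":
                    name = child.text
                elif tag == "filesize" and child.text:
                    size = int(child.text)
            if name is not None:
                yield name, size


    if __name__ == "__main__":
        for name, size in list_file_objects("disk-image.dfxml"):
            print(f"{size or 0:>10}  {name}")
    ```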

    Any Data, Any Time, Anywhere: Global Data Access for Science

    Full text link
    Data access is key to science driven by distributed high-throughput computing (DHTC), an essential technology for many major research projects such as High Energy Physics (HEP) experiments. However, achieving efficient data access becomes quite difficult when many independent storage sites are involved, because users are burdened with learning the intricacies of accessing each system and keeping careful track of data location. We present an alternate approach: the Any Data, Any Time, Anywhere (AAA) infrastructure. Combining several existing software products, AAA presents a global, unified view of storage systems - a "data federation," a global filesystem for software delivery, and a workflow management system. We present how one HEP experiment, the Compact Muon Solenoid (CMS), is utilizing the AAA infrastructure, along with some simple performance metrics. Comment: 9 pages, 6 figures, submitted to 2nd IEEE/ACM International Symposium on Big Data Computing (BDC) 201