
    Memory Subsystems for Security, Consistency, and Scalability

    In response to the continuous demand for the ability to process ever larger datasets, as well as discoveries in next-generation memory technologies, researchers have been vigorously studying memory-driven computing architectures that allow data-intensive applications to access enormous amounts of pooled non-volatile memory. As applications interact with a growing number of components and datasets, existing systems struggle to efficiently enforce the principle of least privilege for security. While non-volatile memory can retain data even after a power loss and allows for large main memory capacity, programmers bear the burden of maintaining the consistency of program memory for fault tolerance, as well as of handling huge datasets with traditional yet expensive memory management interfaces for scalability. Today's computer systems have become too sophisticated for existing memory subsystems to handle many of these design requirements. In this dissertation, we introduce three memory subsystems to address challenges in security, consistency, and scalability. Specifically, we propose SMVs to provide threads with fine-grained control over access privileges for a partially shared address space for security, NVthreads to allow programmers to easily leverage non-volatile memory with automatic persistence for consistency, and PetaMem to enable memory-centric applications to freely access memory beyond the traditional process boundary with support for memory isolation and crash recovery for security, consistency, and scalability.
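
    The abstract only names the SMV mechanism; as a rough TypeScript sketch of the underlying idea, per-thread privileges over a region of a partially shared address space, the snippet below models a memory view as an explicit grant table. All names here (MemoryView, grant, check) are hypothetical illustrations, not the actual SMV API.

```typescript
// Hypothetical sketch of SMV-style per-thread memory privileges.
// All names are illustrative; the dissertation's SMV API differs.

type Privilege = "read" | "write";

class MemoryView {
  // Per-thread privilege table: threadId -> set of granted privileges.
  private grants = new Map<number, Set<Privilege>>();

  constructor(readonly base: number, readonly length: number) {}

  grant(threadId: number, priv: Privilege): void {
    const set = this.grants.get(threadId) ?? new Set<Privilege>();
    set.add(priv);
    this.grants.set(threadId, set);
  }

  revoke(threadId: number, priv: Privilege): void {
    this.grants.get(threadId)?.delete(priv);
  }

  // Enforce least privilege: every access is checked against the
  // requesting thread's grants before the underlying memory is touched.
  check(threadId: number, addr: number, priv: Privilege): boolean {
    const inRange = addr >= this.base && addr < this.base + this.length;
    return inRange && (this.grants.get(threadId)?.has(priv) ?? false);
  }
}

// Usage: thread 1 may read and write, thread 2 may only read.
const view = new MemoryView(0x1000, 4096);
view.grant(1, "read");
view.grant(1, "write");
view.grant(2, "read");
console.log(view.check(2, 0x1010, "write")); // false: write not granted
```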

    DCDIDP: A distributed, collaborative, and data-driven intrusion detection and prevention framework for cloud computing environments

    With the growing popularity of cloud computing, the exploitation of possible vulnerabilities grows at the same pace; the distributed nature of the cloud makes it an attractive target for potential intruders. Despite security issues delaying its adoption, cloud computing has already become an unstoppable force; thus, security mechanisms to ensure its secure adoption are an immediate need. Here, we focus on intrusion detection and prevention systems (IDPSs) to defend against intruders. In this paper, we propose a Distributed, Collaborative, and Data-driven Intrusion Detection and Prevention system (DCDIDP). Its goal is to make use of the resources in the cloud and provide a holistic IDPS for all participating cloud service providers, which collaborate with their peers in a distributed manner at different architectural levels to respond to attacks. We present the DCDIDP framework, whose infrastructure level is composed of three logical layers (network, host, and global), alongside the platform and software levels. Then, we review its components and discuss some existing approaches that could be used for the modules of our proposed framework. Furthermore, we discuss developing a comprehensive trust management framework to support the establishment and evolution of trust among different cloud service providers. © 2011 ICST
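
    The abstract describes the framework only at an architectural level; as an assumption-heavy TypeScript sketch of its collaborative core, the snippet below shows one provider broadcasting an alert to its peers, each of which weighs it by an evolving trust score. Every type, field, and threshold here is invented for illustration and is not part of DCDIDP itself.

```typescript
// Illustrative sketch of collaborative alert sharing between cloud
// providers, in the spirit of DCDIDP. All names and values are invented.

type Layer = "network" | "host" | "global";

interface Alert {
  sourceProvider: string;
  layer: Layer;
  signature: string;   // e.g. a rule or anomaly identifier
  confidence: number;  // 0..1, used by peers to weigh the report
}

class Peer {
  constructor(readonly name: string, private trust = 0.5) {}

  // A peer triggers local prevention only if the sender is trusted
  // enough and the alert itself is sufficiently confident.
  receive(alert: Alert): void {
    if (this.trust * alert.confidence > 0.25) {
      console.log(`${this.name}: blocking traffic matching ${alert.signature}`);
    }
    // Trust evolves with experience (here: a trivial update rule).
    this.trust = 0.9 * this.trust + 0.1 * alert.confidence;
  }
}

function broadcast(alert: Alert, peers: Peer[]): void {
  for (const peer of peers) peer.receive(alert);
}

// Usage: one provider detects a scan and notifies its peers.
const peers = [new Peer("providerB"), new Peer("providerC")];
broadcast(
  { sourceProvider: "providerA", layer: "network",
    signature: "tcp-syn-scan", confidence: 0.9 },
  peers,
);
```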

    Separating Agent-Functioning and Inter-Agent Coordination by Activated Modules: The DECOMAS Architecture

    The embedding of self-organizing inter-agent processes in distributed software applications enables the decentralized coordination of system elements, based solely on concerted, localized interactions. The separation and encapsulation of the activities that are conceptually related to coordination is a crucial concern for systematic development practices, as it prepares for the reuse and systematic integration of coordination processes in software systems. Here, we discuss a programming model that is based on the externalization of process prescriptions and their embedding in Multi-Agent Systems (MAS). One fundamental design concern for a corresponding execution middleware is the minimally invasive augmentation of the activities that affect coordination. This design challenge is approached by the activation of agent modules. Modules are converted into software elements that reason about and modify their host agent. We discuss and formalize this extension within the context of a generic coordination architecture and exemplify the proposed programming model with the decentralized management of (web) service infrastructures.
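
    As a loose TypeScript sketch of the activated-module idea (not the DECOMAS formalism), the snippet below attaches a coordination module to a host agent; while active, the module observes and wraps the host's behavior without touching the agent's own task logic, approximating the minimally invasive augmentation the paper calls for. All names are hypothetical.

```typescript
// Loose sketch of an "activated module": a coordination module is
// attached to a host agent and, while active, may inspect and modify
// that agent. Names are illustrative, not the DECOMAS API.

interface Agent {
  id: string;
  load: number;                 // some observable internal state
  handle(task: string): void;
}

abstract class CoordinationModule {
  protected host!: Agent;
  attach(host: Agent): void { this.host = host; }
  abstract activate(): void;    // begin influencing the host
  abstract deactivate(): void;
}

// Example: a module that externalizes coordination by wrapping the
// host's task handler; the host's own logic stays untouched.
class LoadTrackingModule extends CoordinationModule {
  private original?: Agent["handle"];

  activate(): void {
    this.original = this.host.handle.bind(this.host);
    this.host.handle = (task: string) => {
      this.host.load += 1;      // reason about / modify the host's state
      console.log(`${this.host.id} load is now ${this.host.load}`);
      this.original!(task);     // then defer to the host's behavior
    };
  }

  deactivate(): void {
    if (this.original) this.host.handle = this.original;
  }
}

// Usage
const agent: Agent = { id: "a1", load: 0, handle: t => console.log("doing", t) };
const mod = new LoadTrackingModule();
mod.attach(agent);
mod.activate();
agent.handle("deploy-service");
```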

    Efficient Dynamic Access Analysis Using JavaScript Proxies

    JSConTest introduced the notions of effect monitoring and dynamic effect inference for JavaScript. It enables the description of effects with path specifications resembling regular expressions. It is implemented by an offline source code transformation. To overcome the limitations of the JSConTest implementation, we redesigned and reimplemented effect monitoring by taking advantage of JavaScript proxies. Our new design avoids all drawbacks of the prior implementation. It guarantees full interposition; it is not restricted to a subset of JavaScript; it is self-maintaining; and its scalability to large programs is significantly better than with JSConTest. The improved scalability has two sources. First, the reimplementation is significantly faster than the original, transformation-based implementation. Second, the reimplementation relies on the flyweight pattern and on trace reduction to conserve memory. Only the combination of these techniques enables monitoring and inference for large programs.
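
    The core mechanism, interposing on every property access through a JavaScript proxy, can be illustrated in a few lines. The TypeScript sketch below records read and write access paths with a Proxy handler; it is a toy illustration of full interposition, not the paper's tool, which additionally performs effect inference, trace reduction, and flyweight sharing.

```typescript
// Minimal sketch of proxy-based effect monitoring: every property
// read or write on the wrapped object is recorded as an access path.

function monitor<T extends object>(target: T, path: string, log: string[]): T {
  return new Proxy(target, {
    get(obj, prop, receiver) {
      log.push(`read  ${path}.${String(prop)}`);
      const value = Reflect.get(obj, prop, receiver);
      // Recursively wrap nested objects so deeper accesses are seen too.
      return typeof value === "object" && value !== null
        ? monitor(value as object, `${path}.${String(prop)}`, log)
        : value;
    },
    set(obj, prop, value, receiver) {
      log.push(`write ${path}.${String(prop)}`);
      return Reflect.set(obj, prop, value, receiver);
    },
  });
}

// Usage
const log: string[] = [];
const user = monitor({ name: "ada", address: { city: "london" } }, "user", log);
user.address.city = "paris";
console.log(log); // ["read  user.address", "write user.address.city"]
```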

    Monitoring-Oriented Programming: A Tool-Supported Methodology for Higher Quality Object-Oriented Software

    This paper presents a tool-supported methodological paradigm for object-oriented software development, called monitoring-oriented programming and abbreviated MOP, in which runtime monitoring is a basic software design principle. The general idea underlying MOP is that software developers insert specifications into their code via annotations. Actual monitoring code is automatically synthesized from these annotations before compilation and integrated at appropriate places in the program, according to user-defined configuration attributes. This way, the specification is checked at runtime against the implementation. Moreover, violations and/or validations of specifications can trigger user-defined code at any point in the program, in particular recovery code, code that outputs or sends messages, or code that raises exceptions. The MOP paradigm does not promote or enforce any specific formalism for specifying requirements: it allows users to plug in their favorite or domain-specific specification formalisms via logic plug-in modules. There are two major technical challenges that MOP-supporting tools unavoidably face: monitor synthesis and monitor integration. The former is heavily dependent on the specification formalism and comes as part of the corresponding logic plug-in, while the latter is uniform across specification formalisms and depends only on the target programming language. An experimental prototype tool, called Java-MOP, is also discussed, which currently supports most but not all of the desired MOP features. MOP aims at reducing the gap between formal specification and implementation by integrating the two and allowing them together to form a system.
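
    Java-MOP synthesizes monitoring code from annotations before compilation; the TypeScript sketch below imitates the paradigm at runtime with a higher-order wrapper, where a plain predicate stands in for a pluggable specification formalism and a handler plays the role of user-defined recovery code. It illustrates the general MOP idea only; all names are invented, and Java-MOP itself works differently.

```typescript
// Sketch of the MOP idea: a specification (here a plain predicate,
// standing in for a logic-plug-in formalism) is woven around an
// implementation, and a violation triggers user-defined handler code.

interface Spec<A extends unknown[], R> {
  pre?: (...args: A) => boolean;               // checked before the call
  post?: (result: R, ...args: A) => boolean;   // checked after the call
  onViolation: (kind: "pre" | "post") => void; // recovery / reporting
}

function monitored<A extends unknown[], R>(
  spec: Spec<A, R>,
  impl: (...args: A) => R,
): (...args: A) => R {
  return (...args: A): R => {
    if (spec.pre && !spec.pre(...args)) spec.onViolation("pre");
    const result = impl(...args);
    if (spec.post && !spec.post(result, ...args)) spec.onViolation("post");
    return result;
  };
}

// Usage: monitor that withdraw never drives the balance negative.
let balance = 100;
const withdraw = monitored<[number], number>(
  {
    pre: amount => amount <= balance,
    post: newBalance => newBalance >= 0,
    onViolation: kind => { throw new Error(`spec violated (${kind})`); },
  },
  amount => (balance -= amount),
);

withdraw(40);     // fine, balance is 60
// withdraw(500); // would raise: spec violated (pre)
```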

    Towards Exascale Scientific Metadata Management

    Advances in technology and computing hardware are enabling scientists from all areas of science to produce massive amounts of data using large-scale simulations or observational facilities. In this era of data deluge, effective coordination between the data production and analysis phases hinges on the availability of metadata that describe the scientific datasets. Existing workflow engines have been capturing a limited form of metadata to provide provenance information about the identity and lineage of the data. However, much of the data produced by simulations, experiments, and analyses still needs to be annotated manually, in an ad hoc manner, by domain scientists. Systematic and transparent acquisition of rich metadata becomes a crucial prerequisite to sustain and accelerate the pace of scientific innovation. Yet, a ubiquitous and domain-agnostic metadata management infrastructure that can meet the demands of extreme-scale science is notable by its absence. To address this gap in scientific data management research and practice, we present our vision for an integrated approach that (1) automatically captures and manipulates information-rich metadata while the data is being produced or analyzed and (2) stores metadata within each dataset to permeate metadata-oblivious processes and to enable querying metadata through established and standardized data access interfaces. We motivate the need for the proposed integrated approach using applications from plasma physics, climate modeling, and neuroscience, and then discuss research challenges and possible solutions.
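
    The two proposed principles, capturing metadata automatically at production time and storing it inside the dataset itself, admit a naive TypeScript illustration, shown below. The container shape, field names, and the produce helper are all assumptions made for the sketch, not the paper's design.

```typescript
// Naive sketch of the vision: (1) capture metadata automatically while
// data is produced, (2) store it inside the dataset so that even
// metadata-oblivious tools pass it along. All names are invented.

interface Metadata {
  producer: string;                   // which code produced the data
  createdAt: string;                  // ISO timestamp, captured automatically
  parameters: Record<string, number>; // run configuration (provenance)
  lineage: string[];                  // identifiers of input datasets
}

interface SelfDescribingDataset<T> {
  metadata: Metadata;
  data: T;
}

// Producing data through this wrapper records provenance transparently,
// so the domain scientist never annotates anything by hand.
function produce<T>(
  producer: string,
  parameters: Record<string, number>,
  lineage: string[],
  compute: () => T,
): SelfDescribingDataset<T> {
  return {
    metadata: { producer, createdAt: new Date().toISOString(), parameters, lineage },
    data: compute(),
  };
}

// Usage: a toy "simulation" whose output carries its own metadata.
const run = produce("plasma-sim-v2", { gridSize: 512, dt: 0.01 }, ["mesh-001"],
  () => Array.from({ length: 4 }, (_, i) => Math.sin(0.01 * i)));

console.log(run.metadata.producer, run.data.length);
```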