VXA: A Virtual Architecture for Durable Compressed Archives
Data compression algorithms change frequently, and obsolete decoders do not
always run on new hardware and operating systems, threatening the long-term
usability of content archived using those algorithms. Re-encoding content into
new formats is cumbersome, and highly undesirable when lossy compression is
involved. Processor architectures, in contrast, have remained comparatively
stable over recent decades. VXA, an archival storage system designed around
this observation, archives executable decoders along with the encoded content
it stores. VXA decoders run in a specialized virtual machine that implements an
OS-independent execution environment based on the standard x86 architecture.
The VXA virtual machine strictly limits access to host system services, making
decoders safe to run even if an archive contains malicious code. VXA's adoption
of a "native" processor architecture instead of type-safe language technology
allows reuse of existing "hand-optimized" decoders in C and assembly language,
and permits decoders access to performance-enhancing architecture features such
as vector processing instructions. The performance cost of VXA's virtualization
is typically less than 15% compared with the same decoders running natively.
The storage cost of archived decoders, typically 30-130KB each, can be
amortized across many archived files sharing the same compression method.
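For intuition on how the decoder cost is amortized, here is a minimal Python sketch of an archive that stores one decoder image per compression method and lets many files reference it; the class and field names (Archive, ArchivedDecoder, and so on) are illustrative and do not reflect the actual VXA on-disk format:

    # Minimal sketch (not the real VXA format): one x86 decoder image stored
    # once in the archive and referenced by every file that shares its codec.
    from dataclasses import dataclass, field

    @dataclass
    class ArchivedDecoder:
        name: str            # e.g. "jpeg-decoder" (illustrative name)
        x86_image: bytes     # self-contained decoder built for the virtual x86 environment

    @dataclass
    class ArchivedFile:
        path: str
        encoded: bytes
        decoder: str         # name of the shared decoder entry

    @dataclass
    class Archive:
        decoders: dict = field(default_factory=dict)   # name -> ArchivedDecoder
        files: list = field(default_factory=list)      # ArchivedFile entries

        def add_file(self, path, encoded, decoder):
            # The 30-130KB decoder is stored once; its cost is amortized
            # across every archived file that names it.
            self.decoders.setdefault(decoder.name, decoder)
            self.files.append(ArchivedFile(path, encoded, decoder.name))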
Dialogue Games for Crosslingual Communication
We describe a novel approach to crosslingual dialogue that supports highly accurate communication of semantically complex content between people who do not speak the same language. The approach is introduced through an implemented application that covers the same ground as the food-shopping chapter of a conventional phrase book. We position the approach with respect to dialogue systems and Machine Translation-based approaches to crosslingual dialogue. The current work is offered as a first step towards the innovative use of dialogue theories for the enhancement of human-human dialogue.
Using Cross-Lingual Explicit Semantic Analysis for Improving Ontology Translation
The Semantic Web aims to allow machines to make inferences using the explicit conceptualisations contained in ontologies. By pointing to ontologies, Semantic Web-based applications are able to inter-operate and share common information easily. Nevertheless, multilingual semantic applications are still rare, because most online ontologies are monolingual in English. Solving this issue requires techniques for ontology localisation and translation. However, traditional machine translation is difficult to apply to ontologies, because ontology labels tend to be short and linguistically different from free text. In this paper, we propose an approach to enhance machine translation of ontologies by exploiting the well-structured concept descriptions contained in the ontology. In particular, our approach leverages the semantics contained in the ontology by using Cross-Lingual Explicit Semantic Analysis (CLESA) for context-based disambiguation in phrase-based Statistical Machine Translation (SMT). To the best of our knowledge, this is the first application of CLESA within SMT.
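As a rough illustration of the context-based disambiguation step, the following Python sketch ranks candidate SMT translations of an ontology label against the concept's textual description; real CLESA projects texts onto a cross-lingual concept space built from comparable corpora such as Wikipedia, whereas this sketch substitutes a simple bag-of-words cosine so the example stays self-contained, and the example strings are invented:

    # Sketch of context-based candidate ranking in the spirit of CLESA.
    from collections import Counter
    from math import sqrt

    def vector(text):
        # Stand-in for an ESA concept vector: a plain bag-of-words count.
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[w] * b[w] for w in a if w in b)
        norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def rank_translations(ontology_context, candidates):
        # Score each candidate translation against the concept's description
        # in the ontology and return the best-scoring one.
        ctx = vector(ontology_context)
        return max(candidates, key=lambda c: cosine(ctx, vector(c)))

    # Hypothetical example: choosing between two senses of the label "bank".
    best = rank_translations(
        "financial institution that accepts deposits and makes loans",
        ["bank branch office institution", "river bank shore embankment"],
    )
    print(best)   # the finance-related candidate wins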
Embedding machine-readable proteins interactions data in scientific articles for easy access and retrieval
Extraction of protein-protein interaction data from the scientific literature remains a hard, time- and resource-consuming task. This task would be greatly simplified by embedding in the source, i.e. the research articles themselves, a standardized, compact, machine-readable codification of protein-protein interaction data, making the identification and retrieval of this valuable information easier, faster, and more reliable than it is today.
We briefly discuss how this information can be encoded and embedded in research papers with the collaboration of authors and scientific publishers, and present an online demonstration tool that helps authors convert such biological data quickly and easily into an embeddable, accessible, computer-readable codification.
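One possible shape for such an embedding, shown here purely for illustration (the codification actually proposed by the authors may differ), is a small machine-readable record carried inside the article's HTML and recovered by a simple parser:

    # Illustrative only: embed a machine-readable protein-protein interaction
    # record in an article's HTML and read it back.
    import json, re

    def embed_interaction(html, interaction):
        block = ('<script type="application/json" class="ppi-record">'
                 + json.dumps(interaction) + "</script>")
        return html.replace("</body>", block + "\n</body>")

    def extract_interactions(html):
        pattern = r'<script type="application/json" class="ppi-record">(.*?)</script>'
        return [json.loads(m) for m in re.findall(pattern, html, re.DOTALL)]

    article = "<html><body><p>We show that TP53 binds MDM2 ...</p></body></html>"
    annotated = embed_interaction(article, {
        "interactor_a": "TP53",   # stable database identifiers would be used in practice
        "interactor_b": "MDM2",
        "method": "co-immunoprecipitation",
    })
    assert extract_interactions(annotated)[0]["interactor_a"] == "TP53"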
Predicting Network Attacks Using Ontology-Driven Inference
Graph knowledge models and ontologies are powerful modeling and reasoning tools. We propose an effective approach to modeling network attacks and predicting attacks, both of which play important roles in security management. The goals of this study are twofold: first, we model network attacks, their prerequisites, and their consequences using knowledge representation methods in order to support description logic reasoning and inference over attack domain concepts; second, we propose an ontology-based system that predicts potential attacks by combining inference with observations provided by sensory inputs. We build our ontology and evaluate the corresponding methods using the CAPEC, CWE, and CVE hierarchical datasets. Experimental results show significant capability improvements compared with traditional hierarchical and relational models. The proposed method also reduces false alarms and improves intrusion detection effectiveness.
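To make the prerequisite/consequence chaining concrete, the toy Python sketch below forward-chains from observed facts to predicted attacks; the actual system performs description logic inference over an ontology derived from CAPEC, CWE, and CVE, and the attack entries here are invented placeholders:

    # Toy forward-chaining sketch of prerequisite/consequence reasoning.
    ATTACKS = {
        "port_scan":         {"requires": {"network_access"},   "yields": {"open_ports_known"}},
        "sql_injection":     {"requires": {"open_ports_known"}, "yields": {"db_read_access"}},
        "data_exfiltration": {"requires": {"db_read_access"},   "yields": {"data_leaked"}},
    }

    def predict_attacks(observations):
        # Return attacks whose prerequisites are already satisfied, chaining
        # each attack's consequences into the set of known facts.
        facts, predicted = set(observations), []
        changed = True
        while changed:
            changed = False
            for name, attack in ATTACKS.items():
                if name not in predicted and attack["requires"] <= facts:
                    predicted.append(name)
                    facts |= attack["yields"]
                    changed = True
        return predicted

    print(predict_attacks({"network_access"}))
    # ['port_scan', 'sql_injection', 'data_exfiltration']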
Special Libraries, January 1958
Volume 49, Issue 1
A Framework for Rapid Development and Portable Execution of Packet-Handling Applications
This paper presents a framework that enables the execution of packet-handling applications (such as sniffers, firewalls, and intrusion detectors) on different hardware platforms. The framework is centered on the NetVM, a novel, portable, and efficient virtual processor targeted at packet-based processing, and on the NetPDL, a language that decouples applications from protocol specifications. In addition, we present a high-level programming language that enables rapid development of packet-based applications.
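The separation between application logic and protocol knowledge can be illustrated with a small Python sketch in which field offsets come from a declarative table standing in for a protocol description; the field table, offsets, and function names are hypothetical and far simpler than what NetPDL actually expresses:

    # Sketch: the 'application' refers to protocol fields by name, while the
    # offsets/lengths come from a declarative description of the protocol.
    IPV4_FIELDS = {
        "protocol": (9, 1),    # (byte offset, length) within the IPv4 header
        "src_addr": (12, 4),
        "dst_addr": (16, 4),
    }

    def read_field(packet, fields, name):
        offset, length = fields[name]
        return packet[offset:offset + length]

    def is_tcp_from(packet, src_first_octet):
        # Tiny packet-handling application: match TCP packets whose source
        # address starts with a given first octet.
        proto = read_field(packet, IPV4_FIELDS, "protocol")[0]
        src = read_field(packet, IPV4_FIELDS, "src_addr")
        return proto == 6 and src[0] == src_first_octet

    # 20-byte dummy IPv4 header: protocol = 6 (TCP), source address 10.0.0.1.
    hdr = bytearray(20)
    hdr[9] = 6
    hdr[12:16] = bytes([10, 0, 0, 1])
    print(is_tcp_from(bytes(hdr), 10))   # True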