
    VXA: A Virtual Architecture for Durable Compressed Archives

    Data compression algorithms change frequently, and obsolete decoders do not always run on new hardware and operating systems, threatening the long-term usability of content archived using those algorithms. Re-encoding content into new formats is cumbersome, and highly undesirable when lossy compression is involved. Processor architectures, in contrast, have remained comparatively stable over recent decades. VXA, an archival storage system designed around this observation, archives executable decoders along with the encoded content it stores. VXA decoders run in a specialized virtual machine that implements an OS-independent execution environment based on the standard x86 architecture. The VXA virtual machine strictly limits access to host system services, making decoders safe to run even if an archive contains malicious code. VXA's adoption of a "native" processor architecture instead of type-safe language technology allows reuse of existing "hand-optimized" decoders in C and assembly language, and gives decoders access to performance-enhancing architectural features such as vector processing instructions. The performance cost of VXA's virtualization is typically less than 15% compared with the same decoders running natively. The storage cost of archived decoders, typically 30-130KB each, can be amortized across many archived files sharing the same compression method. Comment: 14 pages, 7 figures, 2 tables
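
    The amortization argument is easy to make concrete. A minimal sketch, using the 130KB decoder size quoted in the abstract; the archive sizes are hypothetical examples:

```python
# Per-file storage overhead of archiving a decoder with the content it decodes.
# The decoder size (130 KB, upper end of the 30-130 KB range in the abstract)
# comes from the paper; the file counts are hypothetical examples.

def per_file_overhead_kb(decoder_kb: float, n_files: int) -> float:
    """Decoder storage cost amortized across files sharing one codec."""
    return decoder_kb / n_files

for n in (10, 1_000, 100_000):
    print(f"{n:>7} files sharing a 130 KB decoder: "
          f"{per_file_overhead_kb(130, n):.4f} KB per file")
```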

    Data literacy in the smart university approach

    Equipping classrooms with inexpensive sensors for data collection can give students and teachers the opportunity to interact with the classroom in a smart way. In this paper, two approaches to acquiring contextual data from a classroom environment are presented. We further present our approach to analysing the collected room usage data on site, using low-cost devices such as Raspberry Pi and Arduino units to perform a significant part of the data analysis. We demonstrate how the usage data was used to model specific room usage situations as cases in a case-based reasoning (CBR) system. The room usage data was then integrated into a room recommender system that reasons over the formalised usage data, allowing for a convenient and intuitive end-user experience based on the collected raw sensor data. Having implemented and tested our approaches, we are currently investigating the possibility of using (XML) Schema-informed compression to enhance the security and efficiency of transmitting the large number of sensor reports, generated by interpreting the raw data on site, to our central data sink. We are investigating this new approach to usage data transmission as we aim to integrate our ongoing work into our vision of the Smart University, to ensure and enhance the Smart University's data literacy.
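
    The CBR retrieval step can be illustrated with a minimal sketch. The attribute names, value ranges and weights below are hypothetical placeholders, not the authors' actual case model:

```python
# Minimal case-based reasoning retrieval over room-usage cases.
# Attributes, ranges and weights are illustrative; the paper's model differs.

CASES = [
    {"occupancy": 0.9, "noise_db": 62, "co2_ppm": 1100, "label": "lecture"},
    {"occupancy": 0.2, "noise_db": 38, "co2_ppm": 520,  "label": "study"},
    {"occupancy": 0.5, "noise_db": 55, "co2_ppm": 800,  "label": "seminar"},
]

RANGES  = {"occupancy": 1.0, "noise_db": 60.0, "co2_ppm": 1500.0}
WEIGHTS = {"occupancy": 0.5, "noise_db": 0.2,  "co2_ppm": 0.3}

def similarity(query: dict, case: dict) -> float:
    """Weighted sum of per-attribute similarities (1 - normalized distance)."""
    return sum(
        w * (1 - abs(query[a] - case[a]) / RANGES[a])
        for a, w in WEIGHTS.items()
    )

query = {"occupancy": 0.8, "noise_db": 60, "co2_ppm": 1000}
best = max(CASES, key=lambda c: similarity(query, c))
print(best["label"])  # -> "lecture": the most similar stored case
```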

    Integrating personal media and digital TV with QoS guarantees using virtualized set-top boxes: architecture and performance measurements

    Nowadays, users consume a lot of functionality in their homes that comes from a service provider located on the Internet. While the home network is typically shielded as much as possible from the 'outside world', the supplied services could be greatly extended if local information could be used. In this article, an extended service is presented that integrates the user's multimedia content, scattered over multiple devices in the home network, into the Electronic Program Guide (EPG) of the Digital TV. We propose to virtualize the set-top box by migrating all functionality except user interfacing to the service provider's infrastructure. The media in the home network is discovered through standard Universal Plug and Play (UPnP), whose QoS functionality is exploited to ensure high-quality playback over the home network, which is essentially outside the service provider's control. The performance of the subsystems is analysed.
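
    UPnP discovery itself is standard SSDP over UDP multicast. A minimal sketch of discovering media servers on a home network follows; the search target is the standard UPnP MediaServer URN, and error handling is omitted:

```python
# Minimal UPnP/SSDP discovery of media servers on the local network.
import socket

MSEARCH = "\r\n".join([
    "M-SEARCH * HTTP/1.1",
    "HOST: 239.255.255.250:1900",
    'MAN: "ssdp:discover"',
    "MX: 2",
    "ST: urn:schemas-upnp-org:device:MediaServer:1",
    "", "",
]).encode()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3.0)
sock.sendto(MSEARCH, ("239.255.255.250", 1900))  # SSDP multicast group/port

try:
    while True:
        data, addr = sock.recvfrom(4096)  # each reply is an HTTP-style response
        print(addr[0], data.split(b"\r\n")[0].decode())
except socket.timeout:
    pass  # no more responses within the MX window
```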

    PREMIS Requirement Statement Project Report

    This is the report of the PRESTA Project, the objective of which was to develop a requirements specification for preservation metadata based on the PREMIS (PREservation Metadata: Implementation Strategies) final report, the Data Dictionary for Preservation Metadata.

    TopSig: Topology Preserving Document Signatures

    Performance comparisons between file signatures and inverted files for text retrieval have previously shown several significant shortcomings of file signatures relative to inverted files. The inverted file approach underpins most state-of-the-art search engine algorithms, such as language and probabilistic models. It has been widely accepted that traditional file signatures are inferior alternatives to inverted files. This paper describes TopSig, a new approach to the construction of file signatures. Many advances in semantic hashing and dimensionality reduction have been made in recent times, but they have so far not been linked to general-purpose, signature-file-based search engines. This paper introduces a different signature file approach that builds upon and extends these recent advances. We are able to demonstrate significant improvements in the performance of signature-file-based indexing and retrieval, performance that is comparable to that of state-of-the-art inverted-file-based systems, including language models and BM25. These findings suggest that file signatures offer a viable alternative to inverted files in suitable settings, and from a theoretical perspective they position the file signature model within the class of vector space retrieval models. Comment: 12 pages, 8 figures, CIKM 2011
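
    The general idea behind such topology-preserving signatures can be sketched with a generic SimHash-style random projection, in which similar documents receive signatures at small Hamming distance. This is an illustrative sketch only, not TopSig's actual construction, which the paper describes:

```python
# Generic SimHash-style document signature: similar texts map to nearby
# signatures in Hamming space. Illustrative only; TopSig differs in detail.
import hashlib

BITS = 64

def signature(text: str) -> int:
    """Sum pseudo-random +/-1 term vectors, then threshold each dimension."""
    counts = [0] * BITS
    for term in text.lower().split():
        h = int.from_bytes(hashlib.md5(term.encode()).digest()[:8], "big")
        for i in range(BITS):
            counts[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(BITS) if counts[i] > 0)

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

a = signature("file signatures for text retrieval")
b = signature("text retrieval with file signatures")
c = signature("smart city sensor deployments")
print(hamming(a, b), hamming(a, c))  # near-duplicates are much closer
```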

    IETF standardization in the field of the Internet of Things (IoT): a survey

    Smart embedded objects will become an important part of what is called the Internet of Things. However, the integration of embedded devices into the Internet introduces several challenges, since many of the existing Internet technologies and protocols were not designed for this class of devices. In the past few years, there have been many efforts to enable the extension of Internet technologies to constrained devices. Initially, this resulted in proprietary protocols and architectures. Later, the integration of constrained devices into the Internet was embraced by the IETF, moving towards standardized IP-based protocols. In this paper, we briefly review the history of integrating constrained devices into the Internet, followed by an extensive overview of IETF standardization work in the 6LoWPAN, ROLL and CoRE working groups. This is complemented with a broad overview of related research results that illustrate how this work can be extended or used to tackle other problems, and with a discussion of open issues and challenges. As such, the aim of this paper is twofold: apart from giving readers solid insights into IETF standardization work on the Internet of Things, it also aims to encourage readers to further explore the world of Internet-connected objects, pointing to future research opportunities.
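
    The CoRE working group's best-known artefact is CoAP (RFC 7252), a compact UDP-based request/response protocol for constrained devices. A minimal sketch of encoding a confirmable GET request follows; the resource name /temp and the server address are hypothetical:

```python
# Minimal CoAP (RFC 7252) confirmable GET for coap://host/temp.
# The resource path and server address are hypothetical examples.
import socket
import struct

def coap_get(path: str, message_id: int) -> bytes:
    assert len(path) < 13  # keep the single-byte option encoding below valid
    header = bytes([
        0x40,  # ver=1, type=0 (confirmable), token length=0
        0x01,  # code 0.01 = GET
    ]) + struct.pack("!H", message_id)
    # One Uri-Path option: option number 11 (delta=11), length = len(path)
    option = bytes([(11 << 4) | len(path)]) + path.encode()
    return header + option

packet = coap_get("temp", message_id=0x1234)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("192.0.2.1", 5683))  # 5683 is the standard CoAP port
```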

    The evolution of Geography Markup Language (GML) compression model

    This paper discusses the evolution of the Geography Markup Language (GML) compression model. GML is a type of XML file normally used to store spatial data from databases. However, due to their huge size, processing and transferring this type of file incurs performance and storage costs. Throughout the years, several GML file compression models have been developed to help address this problem. Four GML file compression models, namely GPress, the Delta Sp Compression and Extrapolation Model, GMill, and GPress++, have been selected for discussion in this paper. In addition, a comparison and the enhancements made in each model are discussed. From the assessment, the GPress++ compression model achieves a significant file compression rate on a synthetic dataset, with a 77.10% improvement over the gzip compressor.
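
    The gzip baseline against which models such as GPress++ are measured is easy to reproduce. A minimal sketch computing a compression ratio for a small, hypothetical GML fragment (specialized models exploit GML structure to beat this baseline):

```python
# Baseline gzip compression ratio for a small GML fragment (hypothetical data).
import gzip

gml = b"""<gml:FeatureCollection xmlns:gml="http://www.opengis.net/gml">
  <gml:featureMember>
    <Road><gml:coordinates>103.85,1.29 103.86,1.30</gml:coordinates></Road>
  </gml:featureMember>
</gml:FeatureCollection>
""" * 100  # repeat the fragment so gzip has redundancy to exploit

compressed = gzip.compress(gml, compresslevel=9)
print(f"raw: {len(gml)} B, gzip: {len(compressed)} B, "
      f"saved: {100 * (1 - len(compressed) / len(gml)):.1f}%")
```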

    MONICA in Hamburg: Towards Large-Scale IoT Deployments in a Smart City

    Modern cities and metropolitan areas all over the world face new management challenges in the 21st century, primarily due to the increasing demands the urban population places on living standards. These challenges range from climate change, pollution, transportation and citizen engagement to urban planning and security threats. The primary goal of a Smart City is to counteract these problems and mitigate their effects by means of modern ICT that improves urban administration and infrastructure. Key ideas are to utilise network communication to inter-connect public authorities, but also to deploy and integrate numerous sensors and actuators throughout the city infrastructure, an approach widely known as the Internet of Things (IoT). Thus, IoT technologies will be an integral part of, and a key enabler for, many objectives of the Smart City vision. The contributions of this paper are as follows. We first examine a number of IoT platforms, technologies and network standards that can help to foster a Smart City environment. Second, we introduce the EU project MONICA, which aims to demonstrate large-scale IoT deployments at public, inner-city events, and give an overview of its IoT platform architecture. Third, we provide a case-study report on Smart City activities by the City of Hamburg and provide insights into recent (on-going) field tests of a vertically integrated, end-to-end IoT sensor application. Comment: 6 pages