The Afterlife of Software
Death on the internet is not limited to human death. The business model of planned obsolescence, the technical work of preserving old websites, systems, and applications, and a cultural emphasis on the new and immediate all combine to make the internet a place where many software technologies have gone to die. Networked modes of living engender networked modes of loss, and a key question is how our connection to the past is reconfigured when software dies. Among digital preservation strategies, emulation may be distinguished from migration, that is, periodically moving data and software to new environments and “rewriting” them as required. Software does not end with its source code, nor with the electronic pulses that produce material changes in underlying hardware and storage media. If bottom-up, continuous preservation is the way forward, then software’s afterlife will depend on more than the work of a few heritage institutions.
Privacy Enforcement in a Cost-Effective Smart Grid
In this technical report we present the current state of the research conducted during the first part of the PhD period. The PhD thesis “Privacy Enforcement in a Cost-Effective Smart Grid” focuses on ensuring privacy when creating a market for energy service providers that develop web services for the residential domain in the envisaged smart grid. The PhD project is funded by, and associated with, the EU project “Energy Demand Aware Open Services for Smart Grid Intelligent Automation” (SmartHG), and the report therefore introduces the project at the system level. Building on this, we present some of the integration, security, and privacy challenges that emerge when designing a system architecture and infrastructure. The resulting architecture is consumer-centric and agent-based, and uses open Internet-based communication protocols to enable interoperability while remaining cost-effective. Finally, the report presents the envisaged future work and publications that will lead to completion of the PhD study.
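The abstract gives no implementation detail, but a consumer-centric, agent-based design over open Internet protocols might look roughly like the following Python sketch, in which a hypothetical home agent keeps fine-grained meter readings local and exposes only a coarse aggregate over HTTP. All names here (HomeAgent, /aggregate) are illustrative assumptions, not part of the SmartHG project.

```python
# Minimal sketch (assumed, not from the SmartHG project): a consumer-side
# agent that keeps fine-grained meter readings private and exposes only
# an aggregate over an open, Internet-based protocol (plain HTTP + JSON).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class HomeAgent:
    """Hypothetical agent running inside the household's trust boundary."""
    def __init__(self):
        self._readings = []  # (timestamp, watts) pairs stay local

    def record(self, timestamp, watts):
        self._readings.append((timestamp, watts))

    def aggregate_watts(self):
        # Only this coarse total leaves the home; the raw fine-grained
        # trace, from which appliance-level behaviour could be inferred,
        # never crosses the household boundary.
        return sum(w for _, w in self._readings)

agent = HomeAgent()

class AgentHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/aggregate":
            body = json.dumps({"aggregate_watts": agent.aggregate_watts()})
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body.encode())
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    agent.record("2014-01-01T00:00:00Z", 350)
    agent.record("2014-01-01T00:15:00Z", 420)
    HTTPServer(("localhost", 8080), AgentHandler).serve_forever()
```

An energy service provider would then consume only the published aggregate endpoint, which is one way the privacy and interoperability goals can coexist in a cost-effective design.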
A Blockchain Application Prototype for the Internet of Things
The emergence of the Internet of Things (IoT), together with the explosion in the number of connected objects and the growth in user needs, makes the Internet very complex. IoT objects are diverse and heterogeneous, which requires establishing interoperability and efficient identity management on the one hand. On the other hand, centralized architectures such as cloud-based ones can suffer from overhead and high latency, with a potential risk of failure. Facing these challenges, Blockchain technology, with its decentralized architecture based on a distributed peer-to-peer network, offers a new infrastructure that allows IoT objects to interact reliably and securely. In this paper, a new approach is proposed with a three-layer architecture: a sensing and data-collection layer made up of the IoT network, a processing and storage layer that records data exchanges at the Blockchain level, and an access and visualization layer exposed through a web interface. The prototype implemented in this study allows all transactions (data exchanges) generated by IoT devices to be recorded and stored on a dedicated Blockchain, assuring the security of IoT objects' communications. It also enables access to and visualization of all data and information, thus enhancing the IoT network's transparency.
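As a rough illustration of the middle (processing and storage) layer, the following toy Python sketch records IoT readings as hash-linked blocks. It is an assumption-laden stand-in, not the paper's actual prototype or its Blockchain platform.

```python
# Illustrative sketch only (not the paper's prototype): a toy chain showing
# how IoT data exchanges could be recorded as hash-linked blocks so that
# tampering with any stored transaction is detectable.
import hashlib
import json
import time

class Block:
    def __init__(self, index, transactions, prev_hash):
        self.index = index
        self.timestamp = time.time()
        self.transactions = transactions  # e.g. sensor readings
        self.prev_hash = prev_hash
        self.hash = self.compute_hash()

    def compute_hash(self):
        payload = json.dumps(
            {"index": self.index, "timestamp": self.timestamp,
             "transactions": self.transactions, "prev_hash": self.prev_hash},
            sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

class Chain:
    def __init__(self):
        self.blocks = [Block(0, [], "0" * 64)]  # genesis block

    def add(self, transactions):
        prev = self.blocks[-1]
        self.blocks.append(Block(prev.index + 1, transactions, prev.hash))

    def verify(self):
        # Altering any recorded exchange breaks the hash links.
        return all(b.prev_hash == p.hash and b.hash == b.compute_hash()
                   for p, b in zip(self.blocks, self.blocks[1:]))

chain = Chain()
chain.add([{"device": "sensor-42", "temperature_c": 21.5}])
print(chain.verify())  # True until any block is altered
```

A real deployment would add a consensus mechanism and peer-to-peer replication; the sketch only shows why recorded exchanges become tamper-evident.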
Organic over-the-horizon targeting for the 2025 surface fleet
Please note that this activity was not conducted in accordance with Federal, DOD, and Navy Human Research Protection Regulations.
Adversarial advances in the proliferation of anti-access/area-denial (A2/AD) techniques require an innovative approach to the design of a maritime system of systems capable of detecting, classifying, and engaging targets in support of organic over-the-horizon (OTH) tactical offensive operations in the 2025–2030 timeframe. Using a systems engineering approach, this study considers manned and unmanned systems in an effort to develop an organic OTH targeting capability for U.S. Navy surface force structures of the future. Key attributes of this study include overall system requirements, limitations, operating area considerations, and issues of interoperability and compatibility. Multiple alternative system architectures are considered and analyzed for feasibility. The candidate architectures include such systems as unmanned aerial vehicles (UAVs), as well as prepositioned undersea and low-observable surface sensor and communication networks. These unmanned systems are expected to operate with high levels of autonomy and should be designed to provide or enhance surface warfare OTH targeting capabilities using emerging extended-range surface-to-surface weapons. This report presents the progress and results of the SEA-21A capstone project with the recommendation that the U.S. Navy explore the use of modestly sized, network-centric UAVs to enhance the U.S. Navy’s ability to conduct surface-based OTH tactical offensive operations by 2025.
http://archive.org/details/organicovertheho1094545933
Approved for public release; distribution is unlimited.
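A back-of-the-envelope calculation shows why own-ship sensors alone cannot support such targeting, which is the gap the proposed UAV and prepositioned sensor architectures would fill. The sketch below uses the standard 4/3-earth approximation for the radio horizon; the antenna and target heights are illustrative assumptions.

```python
import math

def radio_horizon_km(h_meters):
    # Standard 4/3-earth-radius approximation for the radio horizon.
    return 4.12 * math.sqrt(h_meters)

# Illustrative heights (assumptions for the example): a mast-mounted
# shipboard radar versus a small surface contact.
radar_height_m = 30.0
target_height_m = 5.0

detection_range = radio_horizon_km(radar_height_m) + radio_horizon_km(target_height_m)
print(f"Own-ship radar horizon against the contact: {detection_range:.0f} km")
# Roughly 32 km, well short of the reach of emerging extended-range
# surface-to-surface weapons, hence the need for organic off-board
# (UAV or prepositioned sensor network) targeting.
```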
End-to-end deep reinforcement learning in computer systems
The growing complexity of data processing systems has long led systems designers to imagine systems (e.g. databases, schedulers) which can self-configure and adapt based on environmental cues. In this context, reinforcement learning (RL) methods have since their inception appealed to systems developers. They promise to acquire complex decision policies from raw feedback signals. Despite their conceptual popularity, RL methods are scarcely found in real-world data processing systems. Recently, RL has seen explosive growth in interest due to high profile successes when utilising large neural networks (deep reinforcement learning). Newly emerging machine learning frameworks and powerful hardware accelerators have given rise to a plethora of new potential applications.
In this dissertation, I first argue that in order to design and execute deep RL algorithms efficiently, novel software abstractions are required which can accommodate the distinct computational patterns of communication-intensive and fast-evolving algorithms. I propose an architecture which decouples logical algorithm construction from local and distributed execution semantics. I further present RLgraph, my proof-of-concept implementation of this architecture. In RLgraph, algorithm developers can explore novel designs by constructing a high-level dataflow graph through combination of logical components. This dataflow graph is independent of specific backend frameworks or notions of execution, and is only later mapped to execution semantics via a staged build process. RLgraph enables high-performing algorithm implementations while maintaining flexibility for rapid prototyping.
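The following schematic Python sketch illustrates the decoupling idea only; it is not RLgraph's actual API. Logical components are composed into a backend-agnostic graph first, and a later staged build binds them to concrete execution semantics.

```python
# Schematic sketch of the idea (NOT RLgraph's actual API): logical
# components form a backend-agnostic graph, and only a later build step
# maps them onto concrete execution semantics.
class Component:
    def __init__(self, name):
        self.name = name
        self.subcomponents = []

    def add(self, *components):
        self.subcomponents.extend(components)
        return self

    def build(self, backend):
        # Staged build: walk the logical graph and bind each component
        # to the chosen backend (e.g. "tensorflow" or "pytorch").
        print(f"binding {self.name} to {backend}")
        for c in self.subcomponents:
            c.build(backend)

# Algorithm developers compose logic without committing to a backend...
agent = Component("dqn_agent").add(
    Component("replay_memory"),
    Component("q_network"),
    Component("target_network"),
)
# ...and execution semantics are fixed only at build time.
agent.build("tensorflow")
```

The design choice this mimics is separation of concerns: the same logical graph can be rebuilt for local prototyping or distributed execution without rewriting the algorithm itself.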
Second, I investigate reasons for the scarcity of RL applications in systems themselves. I argue that progress in applied RL is hindered by a lack of tools for task model design which bridge the gap between systems and algorithms, and by missing shared standards for evaluating model capabilities. I introduce Wield, a first-of-its-kind tool for incremental model design in applied RL. Wield provides a small set of primitives which decouple systems interfaces and deployment-specific configuration from representation. Core to Wield is a novel instructive experiment protocol called progressive randomisation, which helps practitioners incrementally evaluate different dimensions of non-determinism. I demonstrate how Wield and progressive randomisation can be used to reproduce and assess prior work, and to guide the implementation of novel RL applications.
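Progressive randomisation can be pictured as a ladder of evaluation stages that unfreeze one source of non-determinism at a time, so that regressions can be attributed to a specific dimension. The Python sketch below is a conceptual illustration under assumed names, not Wield's actual interface.

```python
# Conceptual sketch of progressive randomisation (not Wield's API):
# evaluate a task model in stages, unfreezing one dimension of
# non-determinism at a time so failures can be attributed.
import random

STAGES = [
    # (stage name, randomise seed?, randomise workload?)
    ("fixed seed, fixed workload", False, False),
    ("random seeds, fixed workload", True, False),
    ("random seeds, random workloads", True, True),
]

def run_trial(seed, workload):
    # Hypothetical stand-in for training and evaluating the RL agent.
    rng = random.Random(seed)
    return rng.uniform(0.8, 1.0) * workload["difficulty"]

for name, rand_seed, rand_workload in STAGES:
    scores = []
    for trial in range(5):
        seed = trial if rand_seed else 0
        difficulty = random.uniform(0.5, 1.5) if rand_workload else 1.0
        scores.append(run_trial(seed, {"difficulty": difficulty}))
    mean = sum(scores) / len(scores)
    print(f"{name}: mean score {mean:.3f}")
# Only a model whose scores hold up across all stages is credited with
# the corresponding level of generalisation.
```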
The use of systems engineering principles for the integration of existing models and simulations
With the rise in computational power, the prospect of simulating a complex engineering system with a high degree of accuracy, and in a meaningful way, is becoming a real possibility. Modelling and simulation have become ubiquitous throughout the engineering life cycle; as a consequence there are many thousands of existing models and simulations that are potential candidates for integration. This work is concerned with ascertaining whether systems engineering principles can support virtual testing, from the desire to test, through designing experiments, specifying simulations, selecting models and simulations, and integrating component parts, to verifying that the work is as specified and validating that the outcomes are meaningful. A novel representation of a systems engineering framework is proposed and forms the basis for the methods that were developed. It takes the core systems engineering principles and expresses them in a form that can be implemented in a variety of ways. An end-to-end process for virtual testing with the potential to use existing models and simulations is proposed; it provides structure and order to the testing task. A key part of the proposed process is the recognition that the requirements of models and simulations differ from those of the system being designed, and hence a requirements-writing guide specific to modelling and simulation is produced. The automation of any engineering task has the potential to reduce the time to market of the final product, and for this reason the potential of natural language processing (NLP) technology to hasten the proposed processes was investigated. Two case studies were selected to test and demonstrate the potential of the novel approach: the first an investigation into material selection for a squash ball, and the second automotive in nature, concerned with combining steering and braking systems. The processes and methods indicated their potential value, especially in the automotive case study, where inconsistencies were identified that could otherwise have affected the successful integration. This capability, combined with the verification stages, improves confidence in any model and simulation integration. The NLP proof-of-concept software also demonstrated that such technology has value in automating integration. With further testing and development there is the possibility of creating a software package to guide engineers through the difficult task of virtual testing. Such a tool would have the potential to drastically reduce the time to market of complex products.
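As a hint of what such NLP support could look like, the toy Python sketch below flags candidate inconsistencies between requirements drawn from different models, here conflicting sampling rates quoted in different units. It is an illustrative assumption, not the thesis's proof-of-concept software, and the requirement texts are invented for the example.

```python
# Toy illustration (not the thesis's software): lightweight NLP to flag
# candidate inconsistencies between requirements from different models
# before integration, e.g. the same quantity quoted in conflicting units.
import re
from collections import defaultdict

requirements = [
    "REQ-S1: The steering model shall sample at 100 Hz.",
    "REQ-B1: The braking model shall sample at 0.02 s intervals.",
    "REQ-B2: The braking model shall sample at 50 Hz.",
]

pattern = re.compile(r"sample at ([\d.]+)\s*(Hz|s)")

rates_hz = defaultdict(list)
for req in requirements:
    match = pattern.search(req)
    if match:
        value, unit = float(match.group(1)), match.group(2)
        hz = value if unit == "Hz" else 1.0 / value  # normalise to Hz
        rates_hz["sampling"].append((req.split(":")[0], hz))

# Flag quantities quoted with more than one distinct value.
for quantity, entries in rates_hz.items():
    values = {hz for _, hz in entries}
    if len(values) > 1:
        ids = ", ".join(rid for rid, _ in entries)
        print(f"Possible inconsistency in {quantity} rate across {ids}: {sorted(values)}")
```

Even this crude normalise-and-compare pass catches the kind of cross-model mismatch the automotive case study describes; a production tool would add richer parsing and a requirements ontology.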