
    Optical memory disks in optical information processing

    We describe the use of optical memory disks as elements in optical information processing architectures. The optical disk is an optical memory device with a storage capacity approaching 10^10 bits which is naturally suited to parallel access. We discuss optical disk characteristics which are important in optical computing systems such as contrast, diffraction efficiency, and phase uniformity. We describe techniques for holographic storage on optical disks and present reconstructions of several types of computer-generated holograms. Various optical information processing architectures are described for applications such as database retrieval, neural network implementation, and image correlation. Selected systems are experimentally demonstrated.
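
    One of the listed applications, image correlation, is the matched-filter operation an optical correlator performs in parallel across the disk. A minimal digital sketch of that operation (using NumPy; not from the paper) computes the same cross-correlation via the FFT:

```python
import numpy as np

def cross_correlate(scene, template):
    """2D cross-correlation via the FFT: the digital analogue of the
    matched-filter operation an optical correlator performs in light."""
    # Zero-mean both inputs so the correlation peak stands out.
    scene = scene - scene.mean()
    template = template - template.mean()
    # Zero-pad the template to the scene's shape so the FFTs match.
    padded = np.zeros_like(scene)
    h, w = template.shape
    padded[:h, :w] = template
    # Correlation theorem: corr = IFFT(FFT(scene) * conj(FFT(template))).
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(padded))))

# Hypothetical usage: locate a 16x16 patch inside a 256x256 scene.
rng = np.random.default_rng(0)
scene = rng.random((256, 256))
template = scene[40:56, 80:96]   # patch taken from location (40, 80)
corr = cross_correlate(scene, template)
peak = np.unravel_index(np.argmax(corr), scene.shape)
print(peak)                      # correlation peak at row 40, column 80
```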

    Mitigating Architectural Mismatch During the Evolutionary Synthesis of Deep Neural Networks

    Evolutionary deep intelligence has recently shown great promise for producing small, powerful deep neural network models via the organic synthesis of increasingly efficient architectures over successive generations. Existing evolutionary synthesis processes, however, have allowed the mating of parent networks independent of architectural alignment, resulting in a mismatch of network structures. We present a preliminary study into the effects of architectural alignment during evolutionary synthesis using a gene tagging system. Surprisingly, the network architectures synthesized using the gene tagging approach resulted in slower decreases in performance accuracy and storage size; however, the resultant networks were comparable in size and performance accuracy to the non-gene-tagging networks. Furthermore, we speculate that there is a noticeable decrease in network variability for networks synthesized with gene tagging, indicating that enforcing a like-with-like mating policy potentially restricts the exploration of the search space of possible network architectures.
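
    The mating step the study modifies can be illustrated with a small sketch (the tag names, layer encoding, and crossover rule below are hypothetical illustrations, not the authors' implementation): genes carrying the same architectural tag are only recombined with each other, keeping offspring structurally aligned.

```python
import random

def mate(parent_a, parent_b):
    """Gene-tagged crossover: layers are recombined only with layers
    carrying the same architectural tag, so the offspring's structure
    stays aligned with both parents. Each parent maps a gene tag
    (e.g. 'conv1') to a layer spec such as a filter count."""
    child = {}
    for tag in parent_a:
        if tag in parent_b:
            # Like-with-like: inherit the aligned gene from either parent.
            child[tag] = random.choice([parent_a[tag], parent_b[tag]])
        else:
            # No aligned partner gene: carry the gene over unchanged.
            child[tag] = parent_a[tag]
    return child

# Hypothetical parents: filter counts per tagged layer.
a = {"conv1": 64, "conv2": 128, "fc1": 512}
b = {"conv1": 48, "conv2": 96,  "fc1": 256}
print(mate(a, b))   # e.g. {'conv1': 64, 'conv2': 96, 'fc1': 256}
```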

    Scalable Persistent Storage for Erlang

    The many-core revolution makes scalability a key property. The RELEASE project aims to improve the scalability of Erlang on emergent commodity architectures with 100,000 cores. Such architectures require scalable and available persistent storage on up to 100 hosts. We enumerate the requirements for scalable and available persistent storage, and evaluate four popular Erlang DBMSs against these requirements. This analysis shows that Mnesia and CouchDB are not suitable as persistent storage at our target scale, but Dynamo-like NoSQL Database Management Systems (DBMSs) such as Cassandra and Riak potentially are. We investigate the current scalability limits of the Riak 1.1.1 NoSQL DBMS in practice on a 100-node cluster. We establish scientifically, for the first time, the scalability limit of Riak as 60 nodes on the Kalkyl cluster, thereby confirming developer folklore. We show that resources like memory, disk, and network do not limit the scalability of Riak. By instrumenting Erlang/OTP and Riak libraries we identify a specific Riak functionality that limits scalability. We outline how later releases of Riak are refactored to eliminate the scalability bottlenecks. We conclude that Dynamo-style NoSQL DBMSs provide scalable and available persistent storage for Erlang in general, and for our RELEASE target architecture in particular.
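
    The "Dynamo-like" designation refers, among other things, to consistent hashing for placing keys on nodes. A toy sketch of such a ring follows (greatly simplified; real Riak partitions the ring into a fixed number of vnodes and replicates each key to N successor nodes):

```python
import bisect
import hashlib

class Ring:
    """Toy consistent-hashing ring in the style of Dynamo/Riak."""
    def __init__(self, nodes, vnodes=64):
        # Each physical node owns many points (vnodes) on the ring,
        # which smooths the key distribution across nodes.
        self.ring = sorted(
            (self._hash("%s:%d" % (n, i)), n)
            for n in nodes for i in range(vnodes)
        )
        self.points = [h for h, _ in self.ring]

    @staticmethod
    def _hash(s):
        return int(hashlib.sha1(s.encode()).hexdigest(), 16)

    def owner(self, key):
        # The first vnode clockwise from the key's hash owns the key.
        i = bisect.bisect(self.points, self._hash(key)) % len(self.ring)
        return self.ring[i][1]

ring = Ring(["node%d" % i for i in range(5)])
print(ring.owner("user:42"))   # deterministic node assignment
```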

    Green compressive sampling reconstruction in IoT networks

    In this paper, we address the problem of green Compressed Sensing (CS) reconstruction within Internet of Things (IoT) networks, both in terms of computing architecture and reconstruction algorithms. The approach is novel since, unlike most of the literature dealing with energy-efficient gathering of the CS measurements, we focus on the energy efficiency of the signal reconstruction stage given the CS measurements. As a first novel contribution, we present an analysis of the energy consumption within the IoT network under two computing architectures. In the first, reconstruction takes place within the IoT network and the reconstructed data are encoded and transmitted out of the IoT network; in the second, all the CS measurements are forwarded to off-network devices for reconstruction and storage, i.e., reconstruction is off-loaded. Our analysis shows that the two architectures significantly differ in terms of consumed energy, and it outlines a theoretically motivated criterion for selecting a green CS reconstruction computing architecture. Specifically, we present a suitable decision function to determine which architecture outperforms the other in terms of energy efficiency. The presented decision function depends on a few IoT network features, such as the network size, the sink connectivity, and other system parameters. As a second novel contribution, we show how to go beyond the classical performance comparison of different CS reconstruction algorithms, usually carried out with respect to the achieved accuracy. Specifically, we consider the consumed energy and analyze the energy vs. accuracy trade-off. The presented approach, which jointly considers signal processing and IoT network issues, is a relevant contribution to designing green compressive sampling architectures in IoT networks.
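
    As a hedged illustration of the kind of decision function described (the cost model, coefficients, and parameter names below are assumptions for illustration, not the authors' model), one can compare the energy of in-network reconstruction against off-loading:

```python
def reconstruct_in_network(m, n, e_cpu, e_tx, r):
    """Energy to reconstruct inside the IoT network and transmit the
    encoded result out: compute cost grows with problem size (m x n),
    plus transmission of the compressed reconstruction (rate r)."""
    return e_cpu * m * n + e_tx * r * n

def offload_reconstruction(m, e_tx):
    """Energy to forward all m CS measurements to an off-network device."""
    return e_tx * m

def choose_architecture(m, n, e_cpu, e_tx, r):
    """Decision function: pick the greener computing architecture."""
    local = reconstruct_in_network(m, n, e_cpu, e_tx, r)
    remote = offload_reconstruction(m, e_tx)
    return "in-network" if local < remote else "off-loaded"

# Illustrative numbers: m measurements, n signal samples, per-unit
# compute and transmission energies, and output compression rate r.
print(choose_architecture(m=200, n=1000, e_cpu=1e-6, e_tx=1e-3, r=0.1))
```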

    Report on the XBase Project

    This project addressed the conceptual fundamentals of data storage, investigating techniques for the provision of highly generic storage facilities that can be tailored to produce various individually customised storage infrastructures, compliant with the needs of particular applications. This requires the separation of mechanism and policy wherever possible. Aspirations include: actors, whether users or individual processes, should be able to bind to, update and manipulate data and programs transparently with respect to their respective locations; programs should be expressed independently of the storage and network technology involved in their execution; storage facilities should be structure-neutral so that actors can impose multiple interpretations over information, simultaneously and safely; information should not be discarded, so that arbitrary historical views are supported; raw stored information should be open to all; where security restrictions on its use are required, this should be achieved using cryptographic techniques. The key advances of the research were: 1) the identification of a candidate set of minimal storage system building blocks, which are sufficiently simple to avoid encapsulating policy where it cannot be customised by applications, and composable to build highly flexible storage architectures; and 2) insight into the nature of append-only storage components, and the issues arising from their application to common storage use-cases.
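
    A minimal sketch of an append-only building block in the spirit of advance 2) (the interface is hypothetical, not the project's API): nothing is ever overwritten, so any historical view can be replayed, and interpretation is left to readers rather than baked into the store.

```python
class AppendOnlyStore:
    """Structure-neutral, append-only storage: records are opaque,
    nothing is ever overwritten or deleted, and any historical state
    can be recovered by replaying a prefix of the log."""
    def __init__(self):
        self._log = []           # immutable history of (key, value) pairs

    def append(self, key, value):
        self._log.append((key, value))
        return len(self._log)    # version number after this write

    def view(self, version=None):
        """Latest value per key as of a given version (None = now)."""
        state = {}
        for key, value in self._log[:version]:
            state[key] = value   # later writes shadow earlier ones
        return state

store = AppendOnlyStore()
v1 = store.append("config", b"old")
store.append("config", b"new")
print(store.view(v1))   # {'config': b'old'}  -- historical view intact
print(store.view())     # {'config': b'new'}
```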

    A Survey of Virtual Network Architectures

    With the storage needs of the world increasing, especially with the growth of cloud computing, data centers are being utilized more than ever. This increasing need for storage has led to greater use of virtualization to support intra- and inter-data-center communications. The virtualization of physical networks is used to help achieve this goal, but with the creation of Virtual Networks, systems must be designed to create, manage, and secure them. A Virtual Network Architecture is the system design for creating and maintaining virtual network components and the resulting networks they create. Different companies design different Virtual Network Architectures, each with potentially different use cases. In designing a Virtual Network Architecture, many questions arise about how different aspects of the system work, such as how network nodes communicate with the management system and how the data and control planes are implemented. In this report, we summarize and compare the Virtual Network Architectures of different companies. These architectures are used for creating and managing Virtual Networks, some with different use cases, but most with the purpose of creating and managing virtualized networks in large data centers.
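
    As a toy illustration of the management-plane bookkeeping such architectures share (not any particular company's design): a controller records which physical hosts back each tenant's virtual network, and the data plane tunnels traffic between those hosts.

```python
class VirtualNetworkManager:
    """Toy management plane: tracks which physical hosts carry the
    endpoints of each tenant's virtual network (control plane); a
    data plane would encapsulate traffic between those hosts."""
    def __init__(self):
        self.networks = {}   # vnet id -> {vm name: physical host}

    def create_network(self, vnet_id):
        self.networks[vnet_id] = {}

    def attach(self, vnet_id, vm, host):
        # Control-plane update: record where the virtual endpoint lives.
        self.networks[vnet_id][vm] = host

    def peers(self, vnet_id, vm):
        """Hosts a data-plane tunnel must reach from this VM's host."""
        placement = self.networks[vnet_id]
        return {h for v, h in placement.items() if v != vm}

mgr = VirtualNetworkManager()
mgr.create_network("tenant-a")
mgr.attach("tenant-a", "web", "host1")
mgr.attach("tenant-a", "db", "host2")
print(mgr.peers("tenant-a", "web"))   # {'host2'}
```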

    PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning

    This paper presents a method for adding multiple tasks to a single deep neural network while avoiding catastrophic forgetting. Inspired by network pruning techniques, we exploit redundancies in large deep networks to free up parameters that can then be employed to learn new tasks. By performing iterative pruning and network re-training, we are able to sequentially "pack" multiple tasks into a single network while ensuring minimal drop in performance and minimal storage overhead. Unlike prior work that uses proxy losses to maintain accuracy on older tasks, we always optimize for the task at hand. We perform extensive experiments on a variety of network architectures and large-scale datasets, and observe much better robustness against catastrophic forgetting than prior work. In particular, we are able to add three fine-grained classification tasks to a single ImageNet-trained VGG-16 network and achieve accuracies close to those of separately trained networks for each task. Code available at https://github.com/arunmallya/packne
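
    A NumPy sketch of the mask-based packing idea (reduced to a single weight matrix with magnitude pruning; the actual method prunes and retrains entire networks per task):

```python
import numpy as np

def pack_task(weights, free_mask, keep_frac=0.5):
    """Claim a share of the currently free weights for a new task:
    keep the largest-magnitude free weights (frozen for this task)
    and zero out the rest, releasing them for future tasks."""
    free_vals = np.abs(weights[free_mask])
    if free_vals.size == 0:
        raise ValueError("no free parameters left to pack")
    # Magnitude threshold keeping roughly keep_frac of the free weights.
    thresh = np.quantile(free_vals, 1.0 - keep_frac)
    task_mask = free_mask & (np.abs(weights) >= thresh)
    released = free_mask & ~task_mask
    weights[released] = 0.0          # pruned; reusable by later tasks
    return task_mask, released

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
free = np.ones_like(W, dtype=bool)
mask1, free = pack_task(W, free)          # task 1 claims ~half of W
W[free] = rng.normal(size=free.sum())     # stand-in for retraining on task 2
mask2, free = pack_task(W, free)          # task 2 packs into the freed half
print(int(mask1.sum()), int(mask2.sum()), int(free.sum()))   # ~32 ~16 ~16
```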