
    Hierarchical Composition of Memristive Networks for Real-Time Computing

    Advances in materials science have led to physical instantiations of self-assembled networks of memristive devices and demonstrations of their computational capability through reservoir computing. Reservoir computing is an approach that takes advantage of collective system dynamics for real-time computing. A dynamical system, called a reservoir, is excited with a time-varying signal and observations of its states are used to reconstruct a desired output signal. However, such a monolithic assembly limits the computational power due to signal interdependency and the resulting correlated readouts. Here, we introduce an approach that hierarchically composes a set of interconnected memristive networks into a larger reservoir. We use signal amplification and restoration to reduce reservoir state correlation, which improves the feature extraction from the input signals. Using the same number of output signals, such a hierarchical composition of heterogeneous small networks outperforms monolithic memristive networks by at least 20% on waveform generation tasks. On the NARMA-10 task, we reduce the error by up to a factor of 2 compared to homogeneous reservoirs with sigmoidal neurons, whereas single memristive networks are unable to produce the correct result. Hierarchical composition is key for solving more complex tasks with such novel nano-scale hardware.
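    The readout step described above is the core of reservoir computing: the reservoir states are recorded and a linear readout is trained to reconstruct the target. The sketch below is a conventional software echo-state network evaluated on the NARMA-10 benchmark mentioned in the abstract; it only illustrates that readout pipeline, not the memristive hardware or the hierarchical composition the paper introduces, and all sizes and hyperparameters are assumptions.

```python
# Minimal echo-state-network sketch of a reservoir-computing readout on NARMA-10
# (software stand-in, not the memristive hardware described in the abstract).
import numpy as np

rng = np.random.default_rng(0)

def narma10(u):
    """NARMA-10 target series driven by input u (u in [0, 0.5])."""
    y = np.zeros_like(u)
    for t in range(9, len(u) - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * y[t - 9:t + 1].sum()
                    + 1.5 * u[t] * u[t - 9]
                    + 0.1)
    return y

T, N = 4000, 200                      # time steps, reservoir size (assumed)
u = rng.uniform(0, 0.5, T)
target = narma10(u)

W_in = rng.uniform(-0.1, 0.1, N)      # input weights
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

# Drive the reservoir and collect its states (the "readouts").
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Train a linear readout with ridge regression after a washout period.
washout, lam = 200, 1e-6
X, y = states[washout:], target[washout:]
W_out = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ y)
print("train NRMSE:", np.sqrt(np.mean((X @ W_out - y) ** 2) / np.var(y)))
```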

    Too far ahead of its time: Barclays, Burroughs and real-time banking

    The historiography of computing has until now considered real-time computing in banking as predicated on the possibilities of networked ATMs in the 1970s. This article reveals a different story. It exposes the failed bid by Barclays and Burroughs to make real time a reality for British banking in the 1960s.

    Cloud Chaser: Real Time Deep Learning Computer Vision on Low Computing Power Devices

    Internet of Things (IoT) devices, mobile phones, and robotic systems are often denied the power of deep learning algorithms due to their limited computing power. However, to provide time-critical services such as emergency response, home assistance, and surveillance, these devices often need real-time analysis of their camera data. This paper strives to offer a viable approach to integrating high-performance deep learning-based computer vision algorithms with low-resource and low-power devices by leveraging the computing power of the cloud. By offloading the computation work to the cloud, no dedicated hardware is needed to enable deep neural networks on existing low computing power devices. A Raspberry Pi based robot, Cloud Chaser, is built to demonstrate the power of using cloud computing to perform real-time vision tasks. Furthermore, to reduce latency and improve real-time performance, compression algorithms are proposed and evaluated for streaming real-time video frames to the cloud. Comment: Accepted to The 11th International Conference on Machine Vision (ICMV 2018). Project site: https://zhengyiluo.github.io/projects/cloudchaser
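    The offloading pattern described above, compressing camera frames on the device and running the deep network in the cloud, can be sketched as follows. The endpoint URL, JPEG quality setting, and response format are illustrative assumptions, not details taken from the paper.

```python
# Sketch of cloud offloading for on-device vision: capture frames locally,
# JPEG-compress them, and send them to a remote inference endpoint. The
# endpoint URL and quality value are placeholders, not the paper's settings.
import cv2
import requests

CLOUD_URL = "http://example-cloud-host:8000/infer"   # hypothetical endpoint
JPEG_QUALITY = 60                                     # trade image quality for latency

cap = cv2.VideoCapture(0)            # on-board camera (e.g. a Raspberry Pi)
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Compress the frame before streaming to keep bandwidth and latency low.
        ok, buf = cv2.imencode(".jpg", frame,
                               [int(cv2.IMWRITE_JPEG_QUALITY), JPEG_QUALITY])
        if not ok:
            continue
        # Offload inference: the heavy deep-learning model runs in the cloud.
        resp = requests.post(CLOUD_URL, data=buf.tobytes(),
                             headers={"Content-Type": "image/jpeg"},
                             timeout=1.0)
        detections = resp.json()      # e.g. bounding boxes returned by the server
        print(detections)
finally:
    cap.release()
```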

    Laboratories and Real-Time Computing

    The paper describes the approach used at the Department of Automatic Control at Lund Institute of Technology to maintain a high level of practical laboratory experiments. The Department integrates laboratory experiments into its control courses. If the laboratory exercises are properly organized and the student volume is sufficiently large, it is possible to provide a high level of practical laboratory experiments in control education at a reasonable cost. The use of off-the-shelf hardware and open-source software is important. With desktop processes, it is possible to achieve high utilization of lab space and high student throughput.

    A real-time facial expression recognition system for affective computing

    A thesis submitted to the University of London in partial fulfillment of the requirements for the degree of Doctor of Philosophy.

    Optimal Control of Wireless Computing Networks

    Augmented information (AgI) services allow users to consume information that results from the execution of a chain of service functions that process source information to create real-time augmented value. Applications include real-time analysis of remote sensing data, real-time computer vision, personalized video streaming, and augmented reality, among others. We consider the problem of optimal distribution of AgI services over a wireless computing network, in which nodes are equipped with both communication and computing resources. We characterize the wireless computing network capacity region and design a joint flow scheduling and resource allocation algorithm that stabilizes the underlying queuing system while achieving a network cost arbitrarily close to the minimum, with a tradeoff in network delay. Our solution captures the unique chaining and flow scaling aspects of AgI services, while exploiting the use of the broadcast approach coding scheme over the wireless channel. Comment: 30 pages, journal
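    The guarantee described above, queue stability with cost arbitrarily close to the minimum at the price of delay, is characteristic of Lyapunov drift-plus-penalty control. A generic form of that tradeoff is sketched below under standard assumptions (slot-based control, quadratic Lyapunov function, control parameter V); it is not necessarily the paper's exact statement.

```latex
% Generic drift-plus-penalty tradeoff (standard form, not the paper's exact result).
% With Lyapunov function L(Q(t)) = \tfrac{1}{2}\sum_i Q_i(t)^2, a control weight V,
% and a policy that greedily minimizes drift plus V times expected cost each slot:
\[
  \limsup_{T\to\infty}\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\bigl[\mathrm{cost}(t)\bigr]
    \;\le\; \mathrm{cost}^{*} + \frac{B}{V},
  \qquad
  \limsup_{T\to\infty}\frac{1}{T}\sum_{t=0}^{T-1}\sum_i \mathbb{E}\bigl[Q_i(t)\bigr]
    \;\le\; \frac{B + V\bigl(\mathrm{cost}_{\max}-\mathrm{cost}^{*}\bigr)}{\epsilon}.
\]
% B is a constant bounding second moments of arrivals/service and \epsilon is the
% slack inside the capacity region.
```

    Increasing V drives the average cost to within B/V of the optimum, while average queue backlog, and hence delay, grows linearly in V, which is the cost-delay tradeoff the abstract refers to.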

    Vehicular Fog Computing Enabled Real-time Collision Warning via Trajectory Calibration

    Vehicular fog computing (VFC) has been envisioned as a promising paradigm for enabling a variety of emerging intelligent transportation systems (ITS). However, due to inevitable and non-negligible issues in wireless communication, including transmission latency and packet loss, it remains challenging to implement safety-critical applications such as real-time collision warning in vehicular networks. In this paper, we present a vehicular fog computing architecture aimed at supporting effective and real-time collision warning by offloading computation and communication overheads to distributed fog nodes. On top of this architecture, we propose a trajectory calibration based collision warning (TCCW) algorithm along with tailored communication protocols. Specifically, the application-layer vehicle-to-infrastructure (V2I) communication delay is fitted by a Stable distribution using real-world field-testing data. Then, a packet-loss detection mechanism is designed. Finally, TCCW calibrates real-time vehicle trajectories based on received vehicle status, including GPS coordinates, velocity, acceleration, and heading direction, as well as the estimation of communication delay and the detection of packet loss. For performance evaluation, we build a simulation model and implement conventional solutions, including cloud-based warning and fog-based warning without calibration, for comparison. Real vehicle trajectories are extracted as the input, and the simulation results demonstrate the effectiveness of TCCW, which achieves the highest precision and recall across a wide range of scenarios.
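    A minimal version of the trajectory-calibration idea, projecting a delayed vehicle report forward by the estimated communication delay using constant-acceleration kinematics, is sketched below. The data fields and the kinematic model are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of delay-compensated trajectory calibration in the spirit of TCCW:
# project a vehicle's reported state forward by the estimated V2I delay using
# simple kinematics. Field names and the motion model are illustrative only.
import math
from dataclasses import dataclass

@dataclass
class VehicleStatus:
    x: float        # position east of a reference point (m)
    y: float        # position north of a reference point (m)
    speed: float    # m/s
    accel: float    # m/s^2 along the heading
    heading: float  # radians, measured from east

def calibrate(status: VehicleStatus, delay_s: float) -> VehicleStatus:
    """Estimate the vehicle's current state from a stale, delayed report."""
    # Distance covered during the communication delay (constant-acceleration model).
    d = status.speed * delay_s + 0.5 * status.accel * delay_s ** 2
    return VehicleStatus(
        x=status.x + d * math.cos(status.heading),
        y=status.y + d * math.sin(status.heading),
        speed=status.speed + status.accel * delay_s,
        accel=status.accel,
        heading=status.heading,
    )

# Example: a status report delayed by an estimated 120 ms of V2I latency.
stale = VehicleStatus(x=0.0, y=0.0, speed=15.0, accel=1.0, heading=math.pi / 2)
print(calibrate(stale, delay_s=0.120))
```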

    GPU-based Real-time Triggering in the NA62 Experiment

    Over the last few years the GPGPU (General-Purpose computing on Graphics Processing Units) paradigm has represented a remarkable development in the world of computing. Computing for High-Energy Physics is no exception: several works have demonstrated the effectiveness of integrating GPU-based systems into the high-level triggers of different experiments. On the other hand, the use of GPUs in low-level trigger systems, characterized by stringent real-time constraints such as tight time budgets and high throughput, poses several challenges. In this paper we focus on the low-level trigger of the CERN NA62 experiment, investigating the use of real-time computing on GPUs in this synchronous system. Our approach aims at harnessing the GPU computing power to build, in real time, refined physics-related trigger primitives for the RICH detector, as knowledge of the Cerenkov ring parameters allows stringent conditions to be built for data selection at trigger level. The latencies of all components of the trigger chain have been analyzed, showing that networking is the most critical one. To keep the latency of the data-transfer task under control, we devised NaNet, an FPGA-based PCIe Network Interface Card (NIC) with GPUDirect capabilities. For the processing task, we developed specific multiple-ring trigger algorithms to leverage the parallel architecture of GPUs and increase the processing throughput to keep up with the high event rate. Results obtained during the first months of the 2016 NA62 run are presented and discussed.
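    The ring-related trigger primitives mentioned above are built from circle fits to RICH photodetector hits. The sketch below shows the single-ring least-squares (Kasa) fit that multi-ring algorithms typically build on; it is a plain NumPy illustration with synthetic hits, whereas the paper's algorithms run as multi-ring GPU kernels fed through NaNet.

```python
# Least-squares (Kasa) circle fit: the single-ring primitive that multi-ring
# Cherenkov reconstructions build on. Illustrative NumPy version only.
import numpy as np

def fit_ring(x, y):
    """Fit a circle to hit coordinates; returns (center_x, center_y, radius)."""
    # Solve x^2 + y^2 + a*x + b*y + c = 0 in the least-squares sense.
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x ** 2 + y ** 2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - c)
    return cx, cy, r

# Synthetic event: noisy hits scattered around a ring of radius 110 mm.
rng = np.random.default_rng(1)
phi = rng.uniform(0, 2 * np.pi, 20)
x = 12.0 + 110.0 * np.cos(phi) + rng.normal(0, 2.0, phi.size)
y = -7.0 + 110.0 * np.sin(phi) + rng.normal(0, 2.0, phi.size)
print(fit_ring(x, y))   # approximately (12, -7, 110)
```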

    Real-Time Virtualization and Cloud Computing

    In recent years, we have observed three major trends in the development of complex real-time embedded systems. First, to reduce cost and enhance flexibility, multiple systems are sharing common computing platforms via virtualization technology, instead of being deployed separately on physically isolated hosts. Second, multi-core processors are increasingly being used in real-time systems. Third, developers are exploring the possibilities of deploying real-time applications as virtual machines in a public cloud. The integration of real-time systems as virtual machines (VMs) atop common multi-core platforms in a public cloud raises significant new research challenges in meeting the real-time latency requirements of applications. In order to address the challenges of running real-time VMs in the cloud, we first present RT-Xen, a novel real-time scheduling framework within the popular Xen hypervisor. We start with single-core scheduling in RT-Xen, and present the first work that empirically studies and compares different real-time scheduling schemes on the same platform. We then introduce RT-Xen 2.0, which focuses on multi-core scheduling and spans multiple design spaces, including priority schemes, server schemes, and scheduling policies. Experimental results demonstrate that when combined with compositional scheduling theory, RT-Xen can deliver real-time performance to an application running in a VM, while the default credit scheduler cannot. After that, we present RT-OpenStack, a cloud management system designed to support co-hosting real-time and non-real-time VMs in a cloud. RT-OpenStack addresses the problem of running real-time VMs together with non-real-time VMs in a public cloud. Leveraging the resource interface and real-time scheduling provided by RT-Xen, RT-OpenStack provides real-time performance guarantees to real-time VMs, while achieving high resource utilization by allowing non-real-time VMs to share the remaining CPU resources through a novel VM-to-host mapping scheme. Finally, we present RTCA, a real-time communication architecture for VMs sharing the same host, which maintains low latency for high-priority inter-domain communication (IDC) traffic in the face of low-priority IDC traffic.
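    A rough illustration of how a compositional resource interface and a VM-to-host mapping can fit together: each real-time VM is summarized by a (budget, period) interface, and VMs are admitted to a core only while that core's total bandwidth stays at or below 1. The first-fit packing and the interface fields below are assumptions for illustration, not RT-Xen's or RT-OpenStack's actual schemes.

```python
# Hedged sketch of interface-based admission: each real-time VM exposes a
# (budget, period) resource interface; a core accepts a VM only if its total
# bandwidth stays within 1. First-fit placement is illustrative only.
from dataclasses import dataclass, field

@dataclass
class VMInterface:
    name: str
    budget_ms: float   # CPU time guaranteed per period
    period_ms: float

    @property
    def bandwidth(self) -> float:
        return self.budget_ms / self.period_ms

@dataclass
class Core:
    vms: list = field(default_factory=list)

    def utilization(self) -> float:
        return sum(vm.bandwidth for vm in self.vms)

def first_fit(vms, num_cores):
    """Place each VM on the first core that can still honor its interface."""
    cores = [Core() for _ in range(num_cores)]
    placement = {}
    for vm in sorted(vms, key=lambda v: v.bandwidth, reverse=True):
        for i, core in enumerate(cores):
            if core.utilization() + vm.bandwidth <= 1.0:
                core.vms.append(vm)
                placement[vm.name] = i
                break
        else:
            placement[vm.name] = None   # reject: no core can guarantee the interface
    return placement

vms = [VMInterface("control", 4, 10), VMInterface("vision", 6, 20),
       VMInterface("logging", 2, 100), VMInterface("batch", 9, 10)]
print(first_fit(vms, num_cores=2))
```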