
    Neuromorphic analogue VLSI

    Neuromorphic systems emulate the organization and function of nervous systems. They are usually composed of analogue electronic circuits that are fabricated in the complementary metal-oxide-semiconductor (CMOS) medium using very large-scale integration (VLSI) technology. However, these neuromorphic systems are not another kind of digital computer in which abstract neural networks are simulated symbolically in terms of their mathematical behavior. Instead, they directly embody, in the physics of their CMOS circuits, analogues of the physical processes that underlie the computations of neural systems. The significance of neuromorphic systems is that they offer a method of exploring neural computation in a medium whose physical behavior is analogous to that of biological nervous systems and that operates in real time irrespective of size. The implications of this approach are both scientific and practical. The study of neuromorphic systems provides a bridge between levels of understanding. For example, it provides a link between the physical processes of neurons and their computational significance. In addition, the synthesis of neuromorphic systems transposes our knowledge of neuroscience into practical devices that can interact directly with the real world in the same way that biological nervous systems do.

    Functional partitioning of multi-processor architectures

    Many real-time computations, such as process control and robotic applications, may be naturally distributed in a functional manner. One way of ensuring good performance, reliability and security of operation is to map or distribute such tasks onto a distributed, multi-processor system. The time-critical task is thus functionally partitioned into a set of cooperating sub-tasks. These sub-tasks run concurrently and asynchronously on different nodes (stations) of the system. The software design and support of such a functional distribution of sub-tasks (processes) depends on the degree of interaction of these processes among the different nodes. [Continues.]
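
    The decomposition described above can be made concrete with a small sketch: a hypothetical control task split into sensing, control and actuation sub-tasks that run as separate processes and communicate through queues, standing in for message passing between nodes. The stage names, data and control law are illustrative assumptions, not taken from the thesis.

```python
# A minimal sketch of functional partitioning (illustrative, not the thesis design):
# one time-critical task is split into cooperating sub-tasks that run concurrently
# and asynchronously, exchanging data through queues in place of inter-node messages.
import multiprocessing as mp

def sensing(out_q):
    # Sub-task 1: produce measurements (here, a fixed synthetic sequence).
    for reading in [1.0, 2.5, 3.0]:
        out_q.put(reading)
    out_q.put(None)  # end-of-stream marker

def control(in_q, out_q):
    # Sub-task 2: compute a command from each measurement (stand-in control law).
    while (reading := in_q.get()) is not None:
        out_q.put(reading * 0.5)
    out_q.put(None)

def actuation(in_q):
    # Sub-task 3: apply each command (here, just print it).
    while (command := in_q.get()) is not None:
        print(f"apply command {command}")

if __name__ == "__main__":
    q1, q2 = mp.Queue(), mp.Queue()
    procs = [mp.Process(target=sensing, args=(q1,)),
             mp.Process(target=control, args=(q1, q2)),
             mp.Process(target=actuation, args=(q2,))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```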

    Cockpit Ocular Recording System (CORS)

    The overall goal was the development of a Cockpit Ocular Recording System (CORS). The work comprised four tasks: (1) development of the system; (2) experimentation with and improvement of the system; (3) demonstrations of the working system; and (4) system documentation. Overall, the prototype represents a workable and flexibly designed CORS. For the most part, the hardware used for the prototype system is off-the-shelf. All of the following software was developed specifically for the system: (1) setup software with which the user specifies the cockpit configuration and identifies possible areas in which the pilot will look; (2) sensing software which integrates the 60 Hz data from the oculometer and head orientation sensing unit; (3) processing software which applies a spatiotemporal filter to the lookpoint data to determine fixation/dwell positions; (4) data recording output routines; and (5) playback software which allows the user to retrieve and analyze the data. Several experiments were performed to verify the system's accuracy and quantify system deficiencies. These tests resulted in recommendations for any future system that might be constructed.
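
    The spatiotemporal filtering step described in item (3) can be illustrated with a small sketch: 60 Hz lookpoint samples that stay within a small spatial radius for a minimum duration are merged into a single fixation/dwell. The thresholds, sample format and dispersion test are illustrative assumptions, not the actual CORS algorithm.

```python
# A minimal dwell/fixation filter sketch over 60 Hz lookpoint samples (illustrative
# assumptions; not the CORS spatiotemporal filter itself).
from math import hypot

def detect_fixations(samples, max_dispersion=1.0, min_duration=0.1):
    """samples: list of (t, x, y); returns list of (t_start, t_end, x_mean, y_mean)."""
    fixations, group = [], []
    for t, x, y in samples:
        # Start a new group once the gaze moves too far from the group's first sample.
        if group and hypot(x - group[0][1], y - group[0][2]) > max_dispersion:
            _flush(group, fixations, min_duration)
            group = []
        group.append((t, x, y))
    _flush(group, fixations, min_duration)
    return fixations

def _flush(group, fixations, min_duration):
    # Keep the group as a fixation only if the dwell lasted long enough.
    if group and group[-1][0] - group[0][0] >= min_duration:
        xs = [x for _, x, _ in group]
        ys = [y for _, _, y in group]
        fixations.append((group[0][0], group[-1][0], sum(xs) / len(xs), sum(ys) / len(ys)))

# Example: twelve samples dwelling near (5, 5), then a jump away.
samples = [(i / 60.0, 5.0, 5.0) for i in range(12)] + [(12 / 60.0, 20.0, 3.0)]
print(detect_fixations(samples))  # one fixation centred on (5.0, 5.0)
```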

    An investigation into buffer management mechanisms for the Diffserv assured forwarding traffic class

    One of the service classes offered by Diffserv is the Assured Forwarding (AF) class. Because of scalability concerns, IETF specifications recommend that microflow- and aggregate-unaware active buffer management mechanisms such as RIO (Random Early Detection with In/Out-of-profile) be used in the core of Diffserv networks implementing AF. Such mechanisms have, however, been shown to provide poor performance with regard to fairness, stability and network control. Furthermore, recent advances in router technology now allow routers to implement more advanced scheduling and buffer management mechanisms on high-speed ports. This thesis evaluates the performance improvements that may be realized when implementing the Diffserv AF core using a hierarchical microflow- and aggregate-aware buffer management mechanism instead of RIO. The author motivates, proposes and specifies such a mechanism. The mechanism, referred to as H-MAQ or Hierarchical multi-drop-precedence queue state Microflow-Aware Queuing, is evaluated on a testbed that compares the performance of a RIO network core with an H-MAQ network core.
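
    As background for the comparison above, the sketch below shows the basic RIO idea in a few lines: out-of-profile packets are dropped against the total average queue with aggressive thresholds, while in-profile packets are dropped against the in-profile average with gentler ones. The threshold and probability values are illustrative assumptions, not those used in the thesis testbed.

```python
# A minimal sketch of RIO-style (RED with In/Out-of-profile) drop probabilities.
# Thresholds and maximum drop probabilities are illustrative only.
def red_drop_prob(avg_queue, min_th, max_th, max_p):
    # Classic RED ramp: no drop below min_th, linear up to max_p at max_th, drop above.
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

def rio_drop_prob(in_profile, avg_in, avg_total):
    if in_profile:
        # In-profile packets are judged against the in-profile average, gentle thresholds.
        return red_drop_prob(avg_in, min_th=40, max_th=70, max_p=0.02)
    # Out-of-profile packets are judged against the total average, aggressive thresholds.
    return red_drop_prob(avg_total, min_th=10, max_th=40, max_p=0.10)

# Example: total average queue of 30 packets, 20 of them in-profile.
print(rio_drop_prob(True, avg_in=20, avg_total=30))   # 0.0
print(rio_drop_prob(False, avg_in=20, avg_total=30))  # ~0.067
```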

    RHINO ARM cluster control management system


    Energy Measurement and Profiling of Internet of Things Devices

    As technological improvements in hardware and software have grown in leaps and bounds, the presence of IoT devices has been increasing at a fast rate. Profiling and minimizing energy consumption on these devices remains an essential step towards employing them in various application domains. Due to the large size and high cost of commercial energy measurement platforms, the research community has proposed alternative solutions that aim to be simple, accurate, and user friendly. However, these solutions are either costly, have a limited measurement range, or offer low accuracy. In addition, minimizing energy consumption in IoT devices is paramount to their wide deployment in various IoT scenarios. Energy saving methods such as duty-cycling aim to address this constraint by limiting the amount of time the device is powered on. This process needs to be optimized, as devices are now able to perform complex but energy-intensive tasks due to advancements in hardware. The contributions of this paper are two-fold. First, we develop an energy measurement platform for IoT devices. This platform should be accurate, low-cost, easy to build, and configurable in order to scale to the high volume and varying requirements of IoT devices. The second contribution is improving the energy consumption of a Linux-based IoT device in a duty-cycled scenario. It is important to profile and optimize boot-up time and shutdown time, and improve the way user applications are executed. EMPIOT is an accurate, low-cost, easy-to-build, and flexible power measurement platform. We present the hardware and software components that comprise EMPIOT and then study the effect of various design parameters on accuracy. In particular, we analyze the effect of driver, bus speed, input voltage, and buffering mechanisms on sampling rate, measurement accuracy, and processing demand. In addition to this, we also propose a novel calibration technique and report the calibration parameters under different settings. In order to demonstrate EMPIOT's scalability, we evaluate its performance against a ground truth on five different devices. Our results show that for very low-power devices that utilize the 802.15.4 wireless standard, measurement error is less than 4%. In addition, we obtain less than 3% error for 802.11-based devices that generate short, high-power spikes. The second contribution is the optimization of the energy consumption of IoT devices in a duty-cycled scenario by reducing boot-up duration, shutdown duration, and user application duration. To this end, we study and improve the amount of time a Linux-based IoT device is powered on to accomplish its tasks. We analyze the processes of system boot-up and shutdown on two platforms, the Raspberry Pi 3 and Raspberry Pi Zero Wireless, and enhance duty-cycling performance by identifying and disabling time-consuming or unnecessary units initialized in userspace. We also study whether SD card speed and SD card capacity utilization affect boot-up duration and energy consumption. In addition, we propose Pallex, a novel parallel execution framework built on top of the systemd init system to run a user application concurrently with userspace initialization. We validate the performance impact of Pallex when applied to various IoT application scenarios: (i) capturing an image, (ii) capturing and encrypting an image, (iii) capturing and classifying an image using the k-nearest neighbor algorithm, and (iv) capturing images and sending them to a cloud server. Our results show that system lifetime is increased by 18.3%, 16.8%, 13.9% and 30.2% for these application scenarios, respectively.
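
    The core computation behind a platform such as EMPIOT, turning timestamped voltage and current samples into energy, can be sketched as follows. The sample format and the trapezoidal integration are illustrative assumptions, not EMPIOT's actual driver code.

```python
# A minimal sketch: integrate sampled power over time to obtain energy in joules.
# The sample layout and values are hypothetical.
def energy_joules(samples):
    """samples: list of (t_seconds, volts, amps), sorted by time."""
    energy = 0.0
    for (t0, v0, i0), (t1, v1, i1) in zip(samples, samples[1:]):
        p0, p1 = v0 * i0, v1 * i1               # instantaneous power [W]
        energy += 0.5 * (p0 + p1) * (t1 - t0)   # trapezoidal rule [J]
    return energy

# Example: one second of 1 kHz samples at 5 V, a 0.5 A spike over a 0.1 A baseline.
samples = [(t / 1000.0, 5.0, 0.5 if 200 <= t < 300 else 0.1) for t in range(1001)]
print(f"{energy_joules(samples):.3f} J")  # roughly 0.5 J baseline + 0.2 J from the spike
```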

    GPU Accelerated protocol analysis for large and long-term traffic traces

    This thesis describes the design and implementation of GPF+, a complete general packet classification system developed using Nvidia CUDA for Compute Capability 3.5+ GPUs. This system was developed with the aim of accelerating the analysis of arbitrary network protocols within network traffic traces using inexpensive, massively parallel commodity hardware. GPF+ and its supporting components are specifically intended to support the processing of large, long-term network packet traces such as those produced by network telescopes, which are currently difficult and time-consuming to analyse. The GPF+ classifier is based on prior research in the field, which produced a prototype classifier called GPF, targeted at Compute Capability 1.3 GPUs. GPF+ greatly extends the GPF model, improving runtime flexibility and scalability whilst maintaining high execution efficiency. GPF+ incorporates a compact, lightweight register-based state machine that supports massively parallel, multi-match filter predicate evaluation, as well as efficient arbitrary field extraction. GPF+ tracks packet composition during execution, and adjusts processing at runtime through warp-voting to avoid redundant memory transactions and unnecessary computation. GPF+ additionally incorporates a 128-bit in-thread cache, accelerated through register shuffling, to speed up access to packet data in slow GPU global memory. GPF+ uses a high-level DSL to simplify protocol and filter creation, whilst better facilitating protocol reuse. The system is supported by a pipeline of multi-threaded, high-performance host components, which communicate asynchronously through 0MQ messaging middleware to buffer, index, and dispatch packet data on the host system. The system was evaluated using high-end Kepler (Nvidia GTX Titan) and entry-level Maxwell (Nvidia GTX 750) GPUs. The results of this evaluation showed high system performance, limited only by device-side IO (600 MB/s) in all tests. GPF+ maintained high occupancy and device utilisation in all tests, without significant serialisation, and showed improved scaling to more complex filter sets. Results were used to visualise captures of up to 160 GB in seconds, and to extract and pre-filter captures small enough to be easily analysed in applications such as Wireshark.
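
    The multi-match classification idea can be illustrated with a small host-side sketch: each packet is walked once, fields are extracted at protocol-defined offsets, and every filter predicate records its own result rather than stopping at the first match. The offsets, filters and sample packet are illustrative; GPF+ itself performs this work per-thread on the GPU with a register-based state machine rather than in Python.

```python
# A minimal multi-match classification sketch (illustrative stand-in for the GPU kernel).
def classify(packet, filters):
    # Assume an untagged Ethernet frame: EtherType at bytes 12-13; for IPv4,
    # the protocol number sits at byte 23 (14-byte Ethernet header + IPv4 offset 9).
    fields = {"ethertype": int.from_bytes(packet[12:14], "big")}
    if fields["ethertype"] == 0x0800 and len(packet) >= 24:
        fields["ip_proto"] = packet[23]
    # Evaluate every predicate against the extracted fields (multi-match, not first-match).
    return {name: pred(fields) for name, pred in filters.items()}

filters = {
    "is_ipv4": lambda f: f.get("ethertype") == 0x0800,
    "is_tcp":  lambda f: f.get("ip_proto") == 6,
    "is_udp":  lambda f: f.get("ip_proto") == 17,
}

# A synthetic Ethernet + IPv4 header fragment carrying TCP (protocol number 6).
pkt = bytes(12) + b"\x08\x00" + bytes(9) + b"\x06"
print(classify(pkt, filters))  # {'is_ipv4': True, 'is_tcp': True, 'is_udp': False}
```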

    Interoperability Reference Models for Applications of Artificial Intelligence in Medical Imaging

    Medical imaging data is increasingly being used with artificial intelligence and big data technologies. In order for medical imaging collected from different institutions and systems to be usable as artificial intelligence data, interoperability is becoming a key element. Whilst interoperability is currently supported through medical data standards, compliance with personal information protection laws, and other methods, a standard solution for measurement values is deemed necessary for further application as artificial intelligence data. As a result, this study proposes a model for interoperability spanning medical data standards, personal information protection methods, and medical imaging measurements. This model applies Health Level Seven (HL7) and Digital Imaging and Communications in Medicine (DICOM) standards to medical imaging data standards and enables increased accessibility to medical imaging data, in compliance with personal information protection laws, through the use of de-identifying methods. This study focuses on offering a standard for the measurement values of standard materials that addresses uncertainty in measurements, which pre-existing medical imaging measurement standards did not provide. The study finds that medical imaging data standards conform to pre-existing standards and also provide protection of personal information within any medical images through de-identifying methods. Moreover, it proposes a reference model that increases interoperability by composing a process that minimizes uncertainty using standard materials. The interoperability reference model is expected to assist artificial intelligence systems using medical imaging and further enhance the resilience of future health technologies and system development.
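
    The de-identification step mentioned above can be sketched with the pydicom library (assumed to be available). The handful of attributes blanked here is illustrative only; a real de-identification profile covers far more attributes, and pixel data may also contain burned-in identifiers.

```python
# A minimal DICOM de-identification sketch using pydicom (illustrative tag list only).
import pydicom

ILLUSTRATIVE_IDENTIFYING_TAGS = [
    "PatientName", "PatientID", "PatientBirthDate",
    "InstitutionName", "ReferringPhysicianName",
]

def deidentify(in_path, out_path):
    ds = pydicom.dcmread(in_path)
    for keyword in ILLUSTRATIVE_IDENTIFYING_TAGS:
        if keyword in ds:
            ds.data_element(keyword).value = ""  # blank the identifying value
    ds.save_as(out_path)

# Usage (hypothetical file names):
# deidentify("study_raw.dcm", "study_deid.dcm")
```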

    Multi-community command and control systems in law enforcement: An introductory planning guide

    A set of planning guidelines for multi-community command and control systems in law enforcement is presented. Essential characteristics and applications of these systems are outlined. Requirements analysis, system concept design, implementation planning, and performance and cost modeling are described and demonstrated with numerous examples. Program management techniques and joint powers agreements for multi-community programs are discussed in detail. A description of a typical multi-community computer-aided dispatch system is appended.