7 research outputs found

    Accelerating Deep Convolutional Neural Networks Using Specialized Hardware

    No full text
    Abstract: Recent breakthroughs in the development of multi-layer convolutional neural networks have led to state-of-the-art improvements in the accuracy of non-trivial recognition tasks such as large-category image classification and automatic speech recognition. Hardware specialization in the form of GPGPUs, FPGAs, and ASICs offers a promising path towards major leaps in processing capability while achieving high energy efficiency. To harness specialization, an effort is underway at Microsoft to accelerate Deep Convolutional Neural Networks (CNNs) using servers augmented with FPGAs, similar to the hardware being integrated into some of Microsoft's datacenters.

    Comparametric HDR (High Dynamic Range) Imaging for Digital Eye Glass, Wearable Cameras,

    No full text
    Abstract—Wearable computing can be used both to extend the range of human perception and to share sensory experiences with others. For this objective to be made practical, engineering considerations such as form factor, computational power, and power consumption are critical concerns. In this work, we consider the design of a low-power visual seeing aid, and how to implement computationally intensive computational photography algorithms in a small form factor with low power consumption. We present realtime FPGA-based HDR (High Dynamic Range) video processing and filtering, integrating tonal and spatial information obtained from multiple different exposures of the same subject matter. In this embodiment the system captures, in rapid succession, sets of three exposures, "dark", "medium", and "light", over and over again, e.g. "dark", "medium", "light", "dark", "medium", "light", and so on, at 60 frames per second. These exposures are used to determine an estimate of the photoquantity every 1/60th of a second (each time a frame comes in, an estimate goes out). This allows us to build a seeing aid that helps people see better in high-contrast scenes, for example while welding, in outdoor scenes, or in scenes where a bright light is shining directly into the eyes of the wearer. Our system is suitable for being built into eyeglasses, small camera-based lifeglogging or gesture-sensing pendants, and other miniature wearable devices, with low-power and compact circuits that can be easily mounted on the body.
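    The estimation step described in this abstract, combining differently exposed frames into a single photoquantity estimate, can be sketched in software. This is a minimal sketch, not the authors' FPGA implementation: it assumes a pure gamma-2.2 camera response (a real comparametric system would use a calibrated response function), and the Gaussian certainty weighting and the function name are illustrative choices.

    ```python
    import numpy as np

    def estimate_photoquantity(exposures, exposure_times):
        """Combine differently exposed frames into one photoquantity estimate.

        Hypothetical sketch: each frame gives a per-pixel estimate
        q_i = f_inverse(v_i) / t_i, and the estimates are averaged with
        certainty weights that trust mid-range pixel values and discard
        clipped (saturated or underexposed) ones. A pure gamma response
        (exponent 2.2) stands in for the calibrated camera response.
        """
        q_sum = np.zeros_like(np.asarray(exposures[0], dtype=np.float64))
        w_sum = np.zeros_like(q_sum)
        for v, t in zip(exposures, exposure_times):
            v = np.asarray(v, dtype=np.float64)
            # Certainty weight: highest for mid-tones, zero for clipped pixels.
            w = np.exp(-((v - 0.5) ** 2) / 0.08) * ((v > 0.01) & (v < 0.99))
            q_sum += w * (v ** 2.2) / t   # f_inverse(v) / t, gamma assumed
            w_sum += w
        return q_sum / np.maximum(w_sum, 1e-9)
    ```

    For the "dark", "medium", "light" cycle described above, the three most recent frames would be fed in each time a new frame arrives, producing one photoquantity estimate every 1/60th of a second.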

    Realtime HDR (high dynamic range) video for eyetap wearable computers, FPGA-based seeing aids, and glasseyes

    No full text
    Realtime video HDR (High Dynamic Range) is presented in the context of a seeing aid designed originally for task-specific use (e.g. electric arc welding). It can also be built into regular eyeglasses to help people see better in everyday life. Our prototype consists of an EyeTap (electric glasses) welding helmet, with a wearable computer upon which are implemented a set of image processing algorithms that implement realtime HDR (High Dynamic Range) image processing, together with applications such as mediated reality, augmediated™ reality, and augmented reality. The HDR video system runs in realtime and processes 120 frames per second, in groups of three or four frames (e.g. a set of four differently exposed images captured every thirtieth of a second). The processing method, for implementation on FPGAs (Field Programmable Gate Arrays), achieves realtime performance for creating HDR video using our novel compositing methods, and runs on a miniature self-contained battery-operated head-worn circuit board, without the need for a host computer. The result is an essentially self-contained miniaturizable hardware HDR camera system that could be built into smaller eyeglass frames, for use in various wearable computing and mediated/augmediated reality applications, as well as to help people see better in their everyday lives.
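    The frame-grouping pipeline described above (capture at 120 frames per second, composited in sets of three or four exposures) can be sketched as a small scheduler. The class name and the compositing callback are hypothetical; on the FPGA this would be realized as a streaming pipeline rather than a Python object.

    ```python
    from collections import deque

    class ExposureSetScheduler:
        """Hypothetical sketch of the exposure-cycling pipeline: the camera
        cycles through a fixed list of exposure times at 120 frames/s, and a
        composite is emitted once per complete set (e.g. every 1/30th of a
        second for a set of four exposures)."""

        def __init__(self, exposure_times, combine):
            self.exposure_times = list(exposure_times)
            self.combine = combine  # compositing function applied to a full set
            self.buffer = deque(maxlen=len(self.exposure_times))
            self.index = 0

        def next_exposure(self):
            """Exposure time the sensor should use for its next capture."""
            t = self.exposure_times[self.index]
            self.index = (self.index + 1) % len(self.exposure_times)
            return t

        def push(self, frame):
            """Buffer a captured frame; return a composite once per full set."""
            self.buffer.append(frame)
            if len(self.buffer) == self.buffer.maxlen:
                out = self.combine(list(self.buffer))
                self.buffer.clear()
                return out
            return None
    ```

    With four exposure times and 120 captures per second, `push` yields one HDR composite every fourth frame, i.e. 30 composites per second, matching the grouping described in the abstract.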

    Analyzing Analytics

    No full text