    Measuring Digital System Latency from Sensing to Actuation at Continuous 1 Millisecond Resolution

    This thesis describes a new method for measuring the end-to-end latency between sensing and actuation in a digital computing system. Compared to previous work, which generally measures the latency at 16-33 ms intervals or at discrete events separated by hundreds of ms, our new method measures the latency continuously at 1 millisecond resolution. This allows for the observation of variations in latency over sub-1 s periods, instead of relying upon averages of measurements. We have applied our method to two systems, the first using a camera for sensing and an LCD monitor for actuation, and the second using an orientation sensor for sensing and a motor for actuation. Our results show two interesting findings. First, a cyclical variation in latency can be seen based upon the relative rates of the sensor and actuator clocks and buffer times; for the components we tested the variation was in the range of 15-50 Hz with a magnitude of 10-20 ms. Second, orientation sensor error can look like a variation in latency; for the sensor we tested the variation was in the range of 0.5-1.0 Hz with a magnitude of 20-100 ms. Both of these findings have implications for robotics and virtual reality systems. In particular, it is possible that the variation in apparent latency caused by orientation sensor error may have some relation to 'simulator sickness'.
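
    As a rough illustration of what 1 ms-resolution measurement enables, the sketch below generates a hypothetical latency trace sampled at 1 kHz and recovers the frequency of its cyclical variation with an FFT. The 30 Hz, 15 ms oscillation around a 40 ms baseline and the numpy tooling are assumptions for illustration, not the thesis's instrumentation, which is hardware-based.

```python
# A minimal sketch, not the thesis's hardware instrumentation: given a latency
# trace sampled at 1 kHz (one sample per millisecond), estimate the frequency
# of a cyclical latency variation with an FFT. The 30 Hz, 15 ms oscillation
# around a 40 ms baseline is a synthetic, assumed trace for illustration.
import numpy as np

fs = 1000.0                                        # 1 ms resolution -> 1 kHz
t = np.arange(0, 1.0, 1.0 / fs)                    # one second of samples
latency_ms = 40 + 15 * np.sin(2 * np.pi * 30 * t)  # hypothetical latency trace

# Remove the mean so the DC component does not dominate the spectrum.
spectrum = np.abs(np.fft.rfft(latency_ms - latency_ms.mean()))
freqs = np.fft.rfftfreq(latency_ms.size, d=1.0 / fs)
print(f"dominant variation frequency: {freqs[np.argmax(spectrum)]:.1f} Hz")  # ~30 Hz
```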

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in challenging scenarios for traditional cameras, such as low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
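
    The event data model described above (time, pixel location, and polarity per event) can be made concrete with a short sketch. The code below stores synthetic events in a structured array and sums their polarities over a time window into a frame-like image, one of the simplest event-processing strategies; the array layout and the `accumulate` helper are illustrative assumptions, not the survey's API.

```python
# A sketch of the event data model described above: each event carries a
# timestamp, a pixel location, and a polarity (the sign of the brightness
# change). Summing polarities per pixel over a time window is one of the
# simplest ways to turn the asynchronous stream into a frame-like image;
# the array layout and helper below are illustrative, not the survey's API.
import numpy as np

events = np.array(
    [(10, 3, 4, +1), (25, 3, 4, -1), (40, 7, 1, +1)],   # synthetic events
    dtype=[("t", "u8"), ("x", "u2"), ("y", "u2"), ("p", "i1")],
)

def accumulate(events, width, height, t0, t1):
    """Sum event polarities per pixel over the window [t0, t1)."""
    frame = np.zeros((height, width), dtype=np.int32)
    window = events[(events["t"] >= t0) & (events["t"] < t1)]
    np.add.at(frame, (window["y"], window["x"]), window["p"])
    return frame

print(accumulate(events, width=8, height=8, t0=0, t1=50))
```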

    Physical Telepresence: Shape Capture and Display for Embodied, Computer-mediated Remote Collaboration

    We propose a new approach to Physical Telepresence, based on shared workspaces with the ability to capture and remotely render the shapes of people and objects. In this paper, we describe the concept of shape transmission, and propose interaction techniques to manipulate remote physical objects and physical renderings of shared digital content. We investigate how the representation of users' body parts can be altered to amplify their capabilities for teleoperation. We also describe the details of building and testing prototype Physical Telepresence workspaces based on shape displays. A preliminary evaluation shows how users are able to manipulate remote objects, and we report on our observations of several different manipulation techniques that highlight the expressive nature of our system. National Science Foundation (U.S.) Graduate Research Fellowship Program (Grant No. 1122374).
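
    To make the idea of shape transmission concrete, here is a minimal sketch under the assumption that the remote shape display is a grid of pins with discrete height steps: a captured depth map is pooled down to the pin grid and quantized before being sent. The grid size, height resolution, and `depth_to_pins` helper are hypothetical; the paper's shape displays and capture pipeline are richer than this.

```python
# A minimal sketch of shape transmission, assuming the remote display is a
# grid of pins with discrete height steps; the paper's shape displays are
# far richer. A captured depth map is average-pooled down to the pin grid
# and quantized to the display's height resolution before being sent.
import numpy as np

def depth_to_pins(depth, grid=(24, 24), levels=256):
    """Map a depth image onto a pin grid of discrete heights (hypothetical)."""
    h, w = depth.shape
    gy, gx = grid
    # average-pool the depth map down to one value per pin
    pooled = (depth[: h - h % gy, : w - w % gx]
              .reshape(gy, h // gy, gx, w // gx).mean(axis=(1, 3)))
    # normalize to [0, 1] and quantize to the pin height resolution
    norm = (pooled - pooled.min()) / max(np.ptp(pooled), 1e-9)
    return (norm * (levels - 1)).astype(np.uint8)

pins = depth_to_pins(np.random.rand(240, 240))  # stand-in for a captured frame
print(pins.shape, pins.dtype)                   # (24, 24) uint8
```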

    Mist and Edge Computing Cyber-Physical Human-Centered Systems for Industry 5.0: A Cost-Effective IoT Thermal Imaging Safety System

    While many companies worldwide are still striving to adjust to Industry 4.0 principles, the transition to Industry 5.0 is already underway. Under such a paradigm, Cyber-Physical Human-centered Systems (CPHSs) have emerged to leverage operator capabilities in order to meet the goals of complex manufacturing systems towards human-centricity, resilience and sustainability. This article first describes the essential concepts for the development of Industry 5.0 CPHSs and then analyzes the latest CPHSs, identifying their main design requirements and key implementation components. Moreover, the major challenges for the development of such CPHSs are outlined. Next, to illustrate the previously described concepts, a real-world Industry 5.0 CPHS is presented. Such a CPHS enables increased operator safety and operation tracking in manufacturing processes that rely on collaborative robots and heavy machinery. Specifically, the proposed use case consists of a workshop where a smarter use of resources is required, and human proximity detection determines when machinery should be working or not in order to avoid incidents or accidents involving such machinery. The proposed CPHS makes use of a hybrid edge computing architecture with smart mist computing nodes that process thermal images and react to prevent industrial safety issues. The performed experiments show that, in the selected real-world scenario, the developed CPHS algorithms are able to detect human presence with low-power devices (with a Raspberry Pi 3B) in a fast and accurate way (in less than 10 ms with a 97.04% accuracy), thus being an effective solution that can be integrated into many Industry 5.0 applications. Finally, this article provides specific guidelines that will help future developers and managers to overcome the challenges that will arise when deploying the next generation of CPHSs for smart and sustainable manufacturing.
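
    A minimal sketch of the safety gating described above, assuming the thermal camera delivers a small temperature matrix in degrees Celsius and that a simple pixel-count threshold over a skin-temperature band stands in for the paper's detection algorithm. The band, threshold, and function names are illustrative assumptions, not the deployed CPHS pipeline.

```python
# A minimal sketch of the safety gating described above, assuming the thermal
# camera yields a small temperature matrix in degrees Celsius. A pixel-count
# threshold over a skin-temperature band stands in for the paper's detection
# algorithm; the band, threshold, and names are assumptions for illustration.
import numpy as np

HUMAN_BAND_C = (30.0, 38.0)   # assumed skin-temperature range
MIN_PIXELS = 12               # assumed minimum detection size

def human_present(frame_c: np.ndarray) -> bool:
    """Return True when a human-temperature region appears in the frame."""
    mask = (frame_c >= HUMAN_BAND_C[0]) & (frame_c <= HUMAN_BAND_C[1])
    return int(mask.sum()) >= MIN_PIXELS

def safety_gate(frame_c: np.ndarray, machine_running: bool) -> bool:
    """Allow machinery to keep running only when no human is detected."""
    return machine_running and not human_present(frame_c)

frame = 22.0 + 12.0 * np.random.rand(24, 32)  # stand-in thermal frame
print("allow operation:", safety_gate(frame, machine_running=True))
```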

    Gamified Music Learning System with VR Force Feedback for Rehabilitation

    Many conditions cause loss of coordination and motor capabilities in the extremities. One such condition is stroke, which affects approximately 15 million people worldwide each year [1]. Many robotic systems have been developed to assist in the physical and neurological rehabilitation of patients who have suffered a stroke. As a result of this project, an actuator for hand rehabilitation, driven by visual processing and Bowden cables, was designed. This project aims to combine the actuator design with gamification elements to create an interface for future robotic rehabilitation systems, as well as to address the compliance problem found in rehabilitation.
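
    As a rough sketch of the control idea implied above, assuming visual processing yields a normalized per-finger flexion estimate and each Bowden cable is pulled by a motor with bounded travel, a simple proportional mapping might look like this; every name and range here is a hypothetical stand-in for the project's actual design.

```python
# A rough sketch of the control idea implied above, assuming visual processing
# yields a normalized per-finger flexion estimate in [0, 1] and each Bowden
# cable is pulled by a motor with bounded travel. The proportional mapping,
# names, and 25 mm travel are hypothetical stand-ins for the actual design.
def flexion_to_cable_mm(flexion: float, max_travel_mm: float = 25.0) -> float:
    """Map a normalized flexion estimate to a cable pull length in mm."""
    clamped = min(max(flexion, 0.0), 1.0)  # guard against tracking noise
    return clamped * max_travel_mm

print(flexion_to_cable_mm(0.5))  # a half-flexed finger -> 12.5 mm of pull
```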
