
    The Event-Camera Dataset and Simulator: Event-based Data for Pose Estimation, Visual Odometry, and SLAM

    New vision sensors, such as the Dynamic and Active-pixel Vision sensor (DAVIS), incorporate a conventional global-shutter camera and an event-based sensor in the same pixel array. These sensors have great potential for high-speed robotics and computer vision because they allow us to combine the benefits of conventional cameras with those of event-based sensors: low latency, high temporal resolution, and very high dynamic range. However, new algorithms are required to exploit the sensor characteristics and cope with its unconventional output, which consists of a stream of asynchronous brightness changes (called "events") and synchronous grayscale frames. For this purpose, we present and release a collection of datasets captured with a DAVIS in a variety of synthetic and real environments, which we hope will motivate research on new algorithms for high-speed and high-dynamic-range robotics and computer-vision applications. In addition to global-shutter intensity images and asynchronous events, we provide inertial measurements and ground-truth camera poses from a motion-capture system. The latter allows comparing the pose accuracy of ego-motion estimation algorithms quantitatively. All the data are released both as standard text files and binary files (i.e., rosbag). This paper provides an overview of the available data and describes a simulator that we release open-source to create synthetic event-camera data. (7 pages, 4 figures, 3 tables.)
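    The event stream described above is a sequence of timestamped, signed brightness changes per pixel. The sketch below is a minimal illustration of that data model (all names are hypothetical; it does not reflect the released dataset's file format or any DAVIS API): it accumulates event polarities over a time window into an image-like grid, a common first step before frame-based processing.

    ```python
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Event:
        t: float       # timestamp in seconds
        x: int         # pixel column
        y: int         # pixel row
        polarity: int  # +1 for a brightness increase, -1 for a decrease

    def accumulate(events: List[Event], t0: float, t1: float,
                   width: int, height: int):
        """Sum event polarities per pixel over [t0, t1) into a 2-D grid."""
        frame = [[0] * width for _ in range(height)]
        for e in events:
            if t0 <= e.t < t1:
                frame[e.y][e.x] += e.polarity
        return frame

    # Three events; only the first two fall inside the 10 ms window.
    events = [Event(0.001, 2, 1, +1), Event(0.002, 2, 1, +1),
              Event(0.050, 0, 0, -1)]
    img = accumulate(events, 0.0, 0.010, width=4, height=3)
    ```

    Real event rates reach millions of events per second, so production code would use vectorised buffers rather than per-event Python objects.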

    Haptic Feedback for Injecting Biological Cells using Miniature Compliant Mechanisms

    We present a real-time haptics-aided injection technique for biological cells using miniature compliant mechanisms. Our system consists of a haptic robot operated by a human hand, an XYZ stage for micro-positioning, a camera for image capture, and a polydimethylsiloxane (PDMS) miniature compliant device that serves the dual purpose of an injecting tool and a force sensor. In contrast to existing haptics-based micromanipulation techniques where an external force sensor is used, we use visually captured displacements of the compliant mechanism to compute the applied and reaction forces. The human hand can feel the magnified manipulation force through the haptic device in real time while the motion of the human hand is replicated on the mechanism side. Images are captured using a camera at 30 frames per second to extract the displacement data, from which forces are computed at 30 Hz. The force computed in this manner is sent at the rate of 1000 Hz to ensure stable haptic interaction. The haptic cell-manipulation system was tested by injecting into a zebrafish egg cell after validating the technique at a size larger than that of the cell.
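    Two steps in the pipeline above lend themselves to a short sketch: converting the visually measured displacement of the compliant mechanism into a force, and bridging the gap between the 30 Hz vision rate and the 1000 Hz haptic loop. The code below is a simplified illustration under assumed values (a linear spring model for the mechanism and linear interpolation for upsampling); the paper's actual compliance model and interpolation scheme may differ.

    ```python
    def force_from_displacement(disp_m: float, stiffness_n_per_m: float) -> float:
        """Hooke's-law approximation: treat the compliant mechanism as a spring."""
        return stiffness_n_per_m * disp_m

    def upsample_linear(samples, in_rate_hz: int, out_rate_hz: int):
        """Linearly interpolate a low-rate force signal up to the haptic loop rate."""
        ratio = out_rate_hz // in_rate_hz  # e.g. 1000 // 30 = 33 sub-steps
        out = []
        for a, b in zip(samples, samples[1:]):
            for k in range(ratio):
                out.append(a + (b - a) * k / ratio)
        out.append(samples[-1])
        return out

    # A 1 mm deflection of a 500 N/m mechanism reads as 0.5 N,
    # which the haptic device would render magnified to the operator.
    f = force_from_displacement(0.001, 500.0)
    sig = upsample_linear([0.0, 1.0], 30, 1000)
    ```

    In a real system the upsampling would run causally (extrapolating from past samples), since the haptic loop cannot wait for the next camera frame.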

    Multi-Sensor Context-Awareness in Mobile Devices and Smart Artefacts

    The use of context in mobile devices is receiving increasing attention in mobile and ubiquitous computing research. In this article we consider how to augment mobile devices with awareness of their environment and situation as context. Most work to date has been based on integration of generic context sensors, in particular for location and visual context. We propose a different approach based on integration of multiple diverse sensors for awareness of situational context that cannot be inferred from location, targeted at mobile device platforms that typically do not permit processing of visual context. We have investigated multi-sensor context-awareness in a series of projects, and report experience from the development of a number of device prototypes. These include an awareness module for augmentation of a mobile phone, the Mediacup exemplifying context-enabled everyday artifacts, and the Smart-Its platform for aware mobile devices. The prototypes have been explored in various applications to validate the multi-sensor approach to awareness, and to develop new perspectives on how embedded context-awareness can be applied in mobile and ubiquitous computing.
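    The core idea above, inferring situational context from several cheap, diverse sensors instead of one rich sensor, can be sketched with a toy rule-based fusion. This is purely illustrative (thresholds and labels are invented, not taken from the Mediacup or Smart-Its work): a light reading and accelerometer variance together distinguish situations that neither sensor could alone.

    ```python
    def classify_context(light_lux: float, accel_var: float) -> str:
        """Toy fusion of ambient light and motion variance into a context label."""
        if accel_var > 0.5:        # significant motion dominates other cues
            return "carried while moving"
        if light_lux < 5.0:        # still and dark
            return "stowed in pocket or bag"
        return "resting on a surface"

    label = classify_context(light_lux=1.0, accel_var=0.02)
    ```

    Real multi-sensor systems typically replace such hand-written rules with learned classifiers over windowed sensor features, but the fusion principle is the same.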

    Ambient health monitoring: the smartphone as a body sensor network component

    Inertial measurement units used in commercial body sensor networks (e.g. animation suits) are inefficient, difficult to use and expensive when adapted for movement science applications concerning medical and sports science. However, due to advances in micro-electro-mechanical (MEMS) sensors, inertial sensors have become ubiquitous in mobile computing technologies such as smartphones. Smartphones generally use inertial sensors to enhance interface usability. This paper investigates the use of a smartphone's inertial sensing capability as a component in body sensor networks. It discusses several topics centered on inertial sensing: body sensor networks, smartphone networks and a prototype framework for integrating these and other heterogeneous devices. The proposed solution is a smartphone application that gathers, processes and filters sensor data for the purpose of tracking physical activity. All networking functionality is achieved by Skeletrix, a framework for gathering and organizing motion data in online repositories that are conveniently accessible to researchers, healthcare professionals and medical care workers.
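    The "gathers, processes and filters sensor data for tracking physical activity" step can be illustrated with the standard smartphone pipeline: take the accelerometer magnitude (to be orientation-independent), smooth it, and count threshold crossings as steps. This is a generic sketch, not the paper's or Skeletrix's actual algorithm, and the threshold value is an assumption.

    ```python
    import math

    def magnitude(ax: float, ay: float, az: float) -> float:
        """Orientation-independent acceleration magnitude (m/s^2)."""
        return math.sqrt(ax * ax + ay * ay + az * az)

    def moving_average(xs, window: int = 3):
        """Simple causal smoothing to suppress sensor noise."""
        out = []
        for i in range(len(xs)):
            lo = max(0, i - window + 1)
            out.append(sum(xs[lo:i + 1]) / (i - lo + 1))
        return out

    def count_steps(mags, threshold: float = 10.5) -> int:
        """Count upward crossings of the threshold in the magnitude signal."""
        steps = 0
        for prev, cur in zip(mags, mags[1:]):
            if prev < threshold <= cur:
                steps += 1
        return steps

    # Two impact peaks above gravity baseline -> two counted steps.
    n_steps = count_steps([9.8, 11.0, 9.8, 11.2, 9.8])
    ```

    On a real device the same logic would run over a streaming sensor callback rather than a list, with the threshold adapted per user.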

    Digitisation of a moving assembly operation using multiple depth imaging sensors

    Several manufacturing operations continue to be manual even in today's highly automated industry because the complexity of such operations makes them heavily reliant on human skills, intellect and experience. This work aims to aid the automation of one such operation, the wheel loading operation on the trim and final moving assembly line in automotive production. It proposes a new method that uses multiple low-cost depth imaging sensors, commonly used in gaming, to acquire and digitise key shopfloor data associated with the operation, such as motion characteristics of the vehicle body on the moving conveyor line and the angular positions of alignment features of the parts to be assembled, in order to inform an intelligent automation solution. Experiments are conducted to test the performance of the proposed method across various assembly conditions, and the results are validated against an industry-standard method using laser tracking. Some disadvantages of the method are discussed, and improvements are suggested. The proposed method has the potential to be adopted to enable the automation of a wide range of moving assembly operations in multiple sectors of the manufacturing industry.
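    One of the "motion characteristics" mentioned above, the conveyor speed of the vehicle body, can be estimated by tracking a feature's position across timestamped depth frames and fitting a line. The sketch below uses an ordinary least-squares slope as the mean speed; it is a generic estimator under assumed inputs, not the paper's method, and it presumes the feature positions have already been extracted from the depth images.

    ```python
    def conveyor_speed(positions_m, timestamps_s) -> float:
        """Least-squares slope of position vs. time = mean conveyor speed (m/s)."""
        n = len(positions_m)
        mt = sum(timestamps_s) / n
        mp = sum(positions_m) / n
        num = sum((t - mt) * (p - mp)
                  for t, p in zip(timestamps_s, positions_m))
        den = sum((t - mt) ** 2 for t in timestamps_s)
        return num / den

    # Tracked feature advances 0.1 m per second across four frames.
    speed = conveyor_speed([0.0, 0.1, 0.2, 0.3], [0.0, 1.0, 2.0, 3.0])
    ```

    Fitting over a window of frames, rather than differencing consecutive ones, damps the per-frame noise typical of low-cost depth sensors.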

    Towards a Practical Pedestrian Distraction Detection Framework using Wearables

    Pedestrian safety continues to be a significant concern in urban communities, and pedestrian distraction is emerging as one of the main causes of grave and fatal accidents involving pedestrians. The advent of sophisticated mobile and wearable devices, equipped with high-precision on-board sensors capable of measuring fine-grained user movements and context, provides a tremendous opportunity for designing effective pedestrian safety systems and applications. Accurate and efficient recognition of pedestrian distractions in real time given the memory, computation and communication limitations of these devices, however, remains the key technical challenge in the design of such systems. Earlier research efforts in pedestrian distraction detection using data available from mobile and wearable devices have primarily focused only on achieving high detection accuracy, resulting in designs that are resource intensive and unsuitable for implementation on mainstream mobile devices, or computationally slow and not useful for real-time pedestrian safety applications, or require specialized hardware and are therefore less likely to be adopted by most users. In the quest for a pedestrian safety system that achieves a favorable balance between computational efficiency, detection accuracy, and energy consumption, this paper makes the following main contributions: (i) design of a novel complex activity recognition framework which employs motion data available from users' mobile and wearable devices and a lightweight frequency matching approach to accurately and efficiently recognize complex distraction-related activities, and (ii) a comprehensive comparative evaluation of the proposed framework with well-known complex activity recognition techniques in the literature with the help of data collected from human subject pedestrians and prototype implementations on commercially-available mobile and wearable devices.
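    The "lightweight frequency matching" idea above can be illustrated with a minimal version: extract the dominant frequency of a windowed motion signal and match it against per-activity template frequencies. This is a simplified stand-in for the paper's framework (the naive DFT, the template dictionary, and all names are assumptions), but it shows why frequency matching is cheap enough for wearables: one spectrum per window and a nearest-template lookup.

    ```python
    import math

    def dominant_frequency(signal, sample_rate_hz: float) -> float:
        """Naive DFT: return the non-DC frequency bin with the largest magnitude."""
        n = len(signal)
        best_k, best_mag = 1, 0.0
        for k in range(1, n // 2 + 1):
            re = sum(signal[i] * math.cos(2 * math.pi * k * i / n)
                     for i in range(n))
            im = -sum(signal[i] * math.sin(2 * math.pi * k * i / n)
                      for i in range(n))
            mag = math.hypot(re, im)
            if mag > best_mag:
                best_k, best_mag = k, mag
        return best_k * sample_rate_hz / n

    def match_activity(freq_hz: float, templates: dict) -> str:
        """Pick the activity whose characteristic frequency is closest."""
        return min(templates, key=lambda name: abs(templates[name] - freq_hz))

    # A 2 Hz oscillation (roughly walking cadence) sampled at 16 Hz.
    sig = [math.sin(2 * math.pi * 2 * i / 16) for i in range(16)]
    freq = dominant_frequency(sig, 16.0)
    activity = match_activity(freq, {"walking": 2.0, "standing-texting": 0.5})
    ```

    On-device implementations would use an FFT (O(n log n) rather than this O(n^2) loop) and compare more than one spectral feature, but the matching step stays a constant-time lookup.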