
    Wearable Conductive Fiber Sensors for Multi-Axis Human Joint Angle Measurements

    BACKGROUND: Continuous, long-term monitoring of human joint motion finds many applications, especially in the medical and rehabilitation fields. There is a lack of acceptable devices available to perform such measurements in the field in a reliable and non-intrusive way over a long period of time. The purpose of this study was therefore to develop a wearable joint monitoring sensor capable of continuous, day-to-day monitoring. METHODS: A novel technique of incorporating conductive fibers into flexible, skin-tight fabrics surrounding a joint is developed. Resistance changes across these conductive fibers are measured and directly related to specific single- or multi-axis joint angles through the use of a non-linear predictor after an initial, one-time calibration. Because these sensors are intended for multiple uses, an automated registration algorithm has been devised using a sensitivity template matched to an array of sensors spanning the joints of interest. In this way, a sensor array can be taken off and put back on an individual for multiple uses, with the sensors automatically calibrating themselves each time. RESULTS: The wearable sensors designed are comfortable and acceptable for long-term wear in everyday settings. Results have shown the feasibility of this type of sensor, with accurate measurements of joint motion for both a single-axis knee joint and a double-axis hip joint when compared to a standard goniometer. Self-registration of the sensors was found to be possible with only a few simple motions by the patient. CONCLUSION: After preliminary experiments involving a pants sensing garment for lower-body monitoring, it has been seen that this methodology is effective for monitoring joint motion of the hip and knee. This design therefore produces a robust, comfortable, truly wearable joint monitoring device.
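
    The abstract does not specify the form of the non-linear predictor, so the following Python sketch is an illustration only: it fits a cubic polynomial from a hypothetical one-time calibration of fiber resistance against goniometer-measured knee angles and then maps new resistance readings to angle estimates. All numbers are made up for the example.

        import numpy as np

        # Hypothetical one-time calibration data: fiber resistance (ohms) recorded
        # alongside goniometer-measured knee angles (degrees).
        resistance = np.array([210.0, 225.0, 248.0, 279.0, 315.0, 360.0])
        angle_deg  = np.array([  0.0,  15.0,  30.0,  45.0,  60.0,  75.0])

        # Fit a simple non-linear (cubic) predictor from resistance to joint angle.
        coeffs = np.polyfit(resistance, angle_deg, deg=3)
        predict_angle = np.poly1d(coeffs)

        # At run time, a new resistance reading maps directly to an estimated angle.
        print(predict_angle(290.0))  # estimated knee angle in degrees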

    Ca²Lib: Simple and Accurate LiDAR-RGB Calibration using Small Common Markers

    In many fields of robotics, knowing the relative position and orientation between two sensors is a mandatory precondition to operating with multiple sensing modalities. In this context, the LiDAR-RGB camera pair offers complementary features: LiDARs yield sparse, high-quality range measurements, while RGB cameras provide a dense color measurement of the environment. Existing techniques often rely either on complex calibration targets that are expensive to obtain, or on extracted virtual correspondences that can hinder the estimate's accuracy. In this paper we address the problem of LiDAR-RGB calibration using typical calibration patterns (i.e. an A3 chessboard) with minimal human intervention. Our approach exploits the planarity of the target to find correspondences between the sensors' measurements, leading to features that are robust to LiDAR noise. Moreover, we estimate a solution by solving a joint non-linear optimization problem. We validated our approach by carrying out quantitative and comparative experiments with other state-of-the-art approaches. Our results show that our simple scheme performs on par with or better than other approaches using complex calibration targets. Finally, we release an open-source C++ implementation at https://github.com/srrg-sapienza/ca2lib. Comment: 7 pages, 10 figures.
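
    As a rough illustration of plane-based LiDAR-camera extrinsic calibration (not the authors' implementation, which is available in the linked ca2lib repository), the sketch below generates synthetic per-view chessboard plane parameters in both sensor frames and recovers the transform by joint non-linear least squares with SciPy. The ground-truth transform and plane observations are synthetic.

        import numpy as np
        from scipy.optimize import least_squares
        from scipy.spatial.transform import Rotation

        # Synthetic ground-truth LiDAR->camera transform used to generate per-view
        # plane observations (n . x = d) of a chessboard in both sensor frames.
        R_true = Rotation.from_euler('xyz', [0.05, -0.1, 0.02]).as_matrix()
        t_true = np.array([0.1, -0.05, 0.2])

        rng = np.random.default_rng(0)
        normals_l = rng.normal(size=(6, 3))
        normals_l /= np.linalg.norm(normals_l, axis=1, keepdims=True)
        d_l = rng.uniform(0.5, 3.0, size=6)
        normals_c = normals_l @ R_true.T            # n_cam = R n_lidar
        d_c = d_l + normals_c @ t_true              # d_cam = d_lidar + n_cam . t

        def residuals(x):
            # x[:3] is a rotation vector, x[3:] a translation; stack plane residuals.
            R = Rotation.from_rotvec(x[:3]).as_matrix()
            t = x[3:]
            n_pred = normals_l @ R.T
            d_pred = d_l + n_pred @ t
            return np.concatenate([(n_pred - normals_c).ravel(), d_pred - d_c])

        sol = least_squares(residuals, np.zeros(6))
        print(Rotation.from_rotvec(sol.x[:3]).as_matrix())  # should match R_true
        print(sol.x[3:])                                    # should match t_true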

    Dataglove Measurement of Joint Angles in Sign Language Handshapes

    In sign language research, we understand little about articulatory factors involved in shaping phonemic boundaries or the amount (and articulatory nature) of acceptable phonetic variation between handshapes. To date, there exists no comprehensive analysis of handshape based on the quantitative measurement of joint angles during sign production. The purpose of our work is to develop a methodology for collecting and visualizing quantitative handshape data in an attempt to better understand how handshapes are produced at a phonetic level. In this pursuit, we seek to quantify the flexion and abduction angles of the finger joints using a commercial data glove (CyberGlove; Immersion Inc.). We present calibration procedures used to convert raw glove signals into joint angles. We then implement those procedures and evaluate their ability to accurately predict joint angles. Finally, we provide examples of how our recording techniques might inform current research questions.
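
    The calibration procedures themselves are described in the paper; as a minimal stand-in, the sketch below shows the general idea of converting raw glove sensor values into joint angles with a per-sensor linear fit obtained from reference postures of known flexion. The readings and reference angles here are hypothetical.

        import numpy as np

        # Hypothetical calibration: raw glove readings for one flexion sensor,
        # recorded while the signer holds reference postures of known angle (degrees).
        raw_readings = np.array([112.0, 140.0, 171.0, 205.0])
        known_angles = np.array([  0.0,  30.0,  60.0,  90.0])

        # Fit a per-sensor linear map angle = gain * raw + offset (least squares).
        A = np.vstack([raw_readings, np.ones_like(raw_readings)]).T
        gain, offset = np.linalg.lstsq(A, known_angles, rcond=None)[0]

        def raw_to_angle(raw):
            return gain * raw + offset

        print(raw_to_angle(188.0))  # estimated PIP flexion for a new reading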

    A multi-projector CAVE system with commodity hardware and gesture-based interaction

    Spatially immersive systems such as CAVEs provide users with surrounding worlds by projecting 3D models on multiple screens around the viewer. Compared to alternative immersive systems such as HMDs, CAVE systems are a powerful tool for collaborative inspection of virtual environments due to better use of peripheral vision, lower sensitivity to tracking errors, and richer communication possibilities among users. Unfortunately, traditional CAVE setups require sophisticated equipment, including stereo-ready projectors and tracking systems, with high acquisition and maintenance costs. In this paper we present the design and construction of a passive-stereo, four-wall CAVE system based on commodity hardware. Our system works with any mix of a wide range of projector models that can be replaced independently at any time, and achieves high resolution and brightness at minimum cost. The key ingredients of our CAVE are a self-calibration approach that guarantees continuity across the screen, as well as a gesture-based interaction approach based on a clever combination of skeletal data from multiple Kinect sensors.
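
    The abstract does not detail how the skeletal data from the Kinect sensors are combined; one plausible, simplified reading is a confidence-weighted fusion of each joint after transforming all skeletons into a common CAVE frame. The sketch below illustrates that idea with assumed extrinsics and made-up joint readings, and is not the paper's actual method.

        import numpy as np

        # Hypothetical fusion of one skeletal joint (e.g. the right hand) seen by two
        # Kinect sensors: each reports the joint in its own frame; assumed extrinsic
        # transforms map both into the shared CAVE frame, then a weighted average.
        T_k1 = np.eye(4)                                   # Kinect 1 -> CAVE (assumed)
        T_k2 = np.array([[0., -1., 0., 3.0],
                         [1.,  0., 0., 0.0],
                         [0.,  0., 1., 0.0],
                         [0.,  0., 0., 1.0]])              # Kinect 2 -> CAVE (assumed)

        def to_cave(T, p):
            return (T @ np.append(p, 1.0))[:3]

        p1, conf1 = np.array([0.40, 1.20, 2.00]), 0.9      # joint seen by Kinect 1
        p2, conf2 = np.array([1.22, 2.58, 2.00]), 0.6      # same joint, Kinect 2

        fused = (conf1 * to_cave(T_k1, p1) + conf2 * to_cave(T_k2, p2)) / (conf1 + conf2)
        print(fused)  # fused joint position in the CAVE frame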

    Novel smart glove technology as a biomechanical monitoring tool

    Developments in Virtual Reality (VR) technology and its overall market have been occurring since the 1960s, when Ivan Sutherland created the world's first tracked head-mounted display (HMD), a goggle-type headgear. In society today, consumers expect a more immersive experience and associated tools to bridge the cyber-physical divide. This paper presents the development of a next-generation smart glove microsystem to facilitate human-computer interaction through the integration of sensors, processors and wireless technology. The objective of the glove is to measure the range of hand joint movements, in real time and empirically, in a quantitative manner. This includes accurate measurement of flexion, extension, adduction and abduction of the metacarpophalangeal (MCP), proximal interphalangeal (PIP) and distal interphalangeal (DIP) joints of the fingers and thumb in degrees, together with thumb-index web space movement. This system enables full real-time monitoring of complex hand movements. Commercially available gloves are not fitted with sufficient sensors for full data capture, and require calibration for each glove wearer. Unlike these current state-of-the-art data gloves, the UU / Tyndall Inertial Measurement Unit (IMU) glove uses a combination of a novel stretchable substrate material and 9-degree-of-freedom (DOF) inertial sensors, in conjunction with complex data analytics, to detect joint movement. Our novel IMU data glove requires minimal calibration and is therefore particularly suited to multiple application domains such as human-computer interfacing, virtual reality and healthcare environments.
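
    As an illustration of how a pair of IMUs bridging a joint can yield a joint angle (a generic approach, not necessarily the UU / Tyndall pipeline), the sketch below computes the relative orientation between a proximal and a distal finger segment and reports its magnitude as flexion. The example orientations are invented.

        import numpy as np
        from scipy.spatial.transform import Rotation

        # Hypothetical readings from two IMUs bridging a finger joint: one on the
        # proximal segment, one on the distal segment, each giving its orientation.
        q_proximal = Rotation.from_euler('xyz', [2.0, 1.0, 0.5], degrees=True)
        q_distal   = Rotation.from_euler('xyz', [2.0, 1.0, 40.5], degrees=True)

        # The joint angle is the relative rotation between the two segments; for a
        # hinge-like PIP joint its magnitude approximates flexion in degrees.
        relative = q_proximal.inv() * q_distal
        flexion_deg = np.degrees(relative.magnitude())
        print(flexion_deg)  # ~40 degrees of flexion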

    Convex Optimization Approaches for Blind Sensor Calibration using Sparsity

    We investigate a compressive sensing framework in which the sensors introduce a distortion to the measurements in the form of unknown gains. We focus on blind calibration, using measurements of multiple unknown (but sparse) signals, and formulate the joint recovery of the gains and the sparse signals as a convex optimization problem. We divide this problem into three subproblems with different conditions on the gains, specifically (i) gains with different amplitudes and the same phase, (ii) gains with the same amplitude and different phases, and (iii) gains with different amplitudes and phases. In order to solve the first case, we propose an extension to basis pursuit optimization which can estimate the unknown gains along with the unknown sparse signals. For the second case, we formulate a quadratic approach that eliminates the unknown phase shifts and retrieves the unknown sparse signals. An alternative form of this approach is also formulated to reduce complexity and memory requirements and to provide scalability with respect to the number of input signals. Finally, for the third case, we propose a formulation that combines the earlier two approaches to solve the problem. The performance of the proposed algorithms is investigated extensively through numerical simulations, which demonstrate that simultaneous signal recovery and calibration is possible with convex methods when sufficiently many (unknown, but sparse) calibrating signals are provided.
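
    A minimal sketch of the first subproblem (unknown positive gains, common phase), assuming a convex reformulation in terms of the reciprocal gains d = 1/g with a scale-fixing constraint; the synthetic data, CVXPY modelling, and problem sizes are illustrative choices rather than the paper's exact formulation.

        import numpy as np
        import cvxpy as cp

        # Synthetic blind-calibration instance: each measurement is y_j = diag(g) A x_j
        # with x_j sparse and the sensor gains g unknown.
        rng = np.random.default_rng(1)
        m, n, n_signals, sparsity = 20, 40, 5, 3
        A = rng.normal(size=(m, n))
        g_true = rng.uniform(0.5, 2.0, size=m)
        X_true = np.zeros((n, n_signals))
        for j in range(n_signals):
            X_true[rng.choice(n, sparsity, replace=False), j] = rng.normal(size=sparsity)
        Y = np.diag(g_true) @ A @ X_true

        # Convex reformulation: minimize the l1 norm of all signals subject to
        # diag(d) Y = A X, with sum(d) = m fixing the unavoidable global scale.
        d = cp.Variable(m)
        X = cp.Variable((n, n_signals))
        constraints = [cp.diag(d) @ Y == A @ X, cp.sum(d) == m]
        cp.Problem(cp.Minimize(cp.sum(cp.abs(X))), constraints).solve()

        # Gains are recovered up to scale; compare normalized ratios.
        g_rec = 1.0 / d.value
        print(np.round(g_rec / g_rec[0], 3))
        print(np.round(g_true / g_true[0], 3))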

    Extrinsic Calibration and Ego-Motion Estimation for Mobile Multi-Sensor Systems

    Autonomous robots and vehicles are often equipped with multiple sensors to perform vital tasks such as localization and mapping. The joint system of various sensors with different sensing modalities can often provide better localization or mapping results than any individual sensor alone, in terms of accuracy or completeness. However, to enable improved performance, two important challenges have to be addressed when dealing with multi-sensor systems. First, how can the spatial relationship between the individual sensors on the robot be determined accurately? This is a vital task known as extrinsic calibration. Without this calibration information, measurements from different sensors cannot be fused. Second, how can data from multiple sensors be combined to correct for the deficiencies of each sensor and thus provide better estimates? This is another important task known as data fusion. The core of this thesis is to provide answers to these two questions. We cover, in the first part of the thesis, aspects related to improving the extrinsic calibration accuracy, and present, in the second part, novel data fusion algorithms designed to address the ego-motion estimation problem using data from a laser scanner and a monocular camera. In the extrinsic calibration part, we contribute by revealing and quantifying the relative calibration accuracies of three common types of calibration methods, so as to offer insight into choosing the best calibration method when multiple options are available. Following that, we propose an optimization approach for solving common motion-based calibration problems. By exploiting the Gauss-Helmert model, our approach is more accurate and robust than the classical least-squares model. In the data fusion part, we focus on camera-laser data fusion and contribute two new ego-motion estimation algorithms that combine complementary information from a laser scanner and a monocular camera. The first algorithm uses camera image information to guide the laser scan matching. It can provide accurate motion estimates and yet works in general conditions, requiring neither a field-of-view overlap between the camera and laser scanner nor an initial guess of the motion parameters. The second algorithm combines the camera and laser scanner information in a direct way, assuming the field-of-view overlap between the sensors is substantial. By maximizing the information usage of both the sparse laser point cloud and the dense image, the second algorithm achieves state-of-the-art estimation accuracy. Experimental results confirm that both algorithms offer excellent alternatives to state-of-the-art camera-laser ego-motion estimation algorithms.
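
    To make the motion-based calibration idea concrete, the sketch below solves the rotation part of a hand-eye-style problem in closed form (Kabsch alignment of the relative-rotation axes of the two sensors) on synthetic, noiseless data. This corresponds to a classical least-squares baseline of the kind the thesis improves on with the Gauss-Helmert model, not to the thesis' own estimator.

        import numpy as np
        from scipy.spatial.transform import Rotation

        # Two rigidly mounted sensors observe their own relative rotations A_i, B_i
        # between poses; they satisfy A_i = R B_i R^T, so the rotation axes obey a_i = R b_i.
        R_true = Rotation.from_euler('xyz', [10, -5, 30], degrees=True).as_matrix()

        rng = np.random.default_rng(2)
        axes_b = rng.normal(size=(10, 3))
        axes_b /= np.linalg.norm(axes_b, axis=1, keepdims=True)
        axes_a = axes_b @ R_true.T                 # a_i = R_true b_i

        # Closed-form rotation alignment of the two axis sets (Kabsch / SVD).
        H = axes_b.T @ axes_a
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
        R_est = Vt.T @ D @ U.T
        print(np.allclose(R_est, R_true, atol=1e-6))  # True on noiseless data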

    Real-time marker-less multi-person 3D pose estimation in RGB-Depth camera networks

    This paper proposes a novel system to estimate and track the 3D poses of multiple persons in calibrated RGB-Depth camera networks. The multi-view 3D pose of each person is computed by a central node which receives the single-view outcomes from each camera of the network. Each single-view outcome is computed by using a CNN for 2D pose estimation and extending the resulting skeletons to 3D by means of the sensor depth. The proposed system is marker-less and multi-person, is independent of the background, and does not make any assumption on people's appearance or initial pose. The system provides real-time outcomes, thus being perfectly suited for applications requiring user interaction. Experimental results show the effectiveness of this work with respect to a baseline multi-view approach in different scenarios. To foster research and applications based on this work, we released the source code in OpenPTrack, an open-source project for RGB-D people tracking. Comment: Submitted to the 2018 IEEE International Conference on Robotics and Automation.
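
    As a small illustration of the depth-based lifting step (the 2D skeletons are extended to 3D per keypoint using the sensor depth), the sketch below back-projects one detected keypoint with example Kinect-like intrinsics; the intrinsic values, pixel coordinates, and depth are placeholders.

        import numpy as np

        # Example pinhole intrinsics (fx, fy, cx, cy) for a Kinect-like RGB-D sensor.
        fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5

        def keypoint_to_3d(u, v, depth_m):
            """Back-project a 2D keypoint with metric depth into the camera frame."""
            x = (u - cx) * depth_m / fx
            y = (v - cy) * depth_m / fy
            return np.array([x, y, depth_m])

        # e.g. a detected right shoulder at pixel (400, 220) with 2.3 m of depth
        print(keypoint_to_3d(400, 220, 2.3))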
