    A study to trial the use of inertial non-optical motion capture for ergonomic analysis of manufacturing work

    It is going to be increasingly important for manufacturing system designers to incorporate human activity data and ergonomic analysis with other performance data in digital design modelling and system monitoring. However, traditional methods of capturing human activity data are not sufficiently accurate to meet the needs of digitised data analysis: qualitative data are subject to bias and imprecision, and optically derived data are hindered by occlusions caused by structures or other people in a working environment. To meet contemporary needs for more accurate and objective data, inertial non-optical methods of measurement therefore appear to offer a solution. This article describes a case study conducted within the aerospace manufacturing industry. Data on the human activities involved in aircraft wing system installations were first collected via traditional ethnographic methods and found to have limited accuracy and suitability for digital modelling; similar human activity data subsequently collected using an automatic non-optical motion capture system in a more controlled environment showed better suitability. The results demonstrate not only the potential benefits of applying the inertial non-optical method in future digital modelling and performance monitoring but also the value of continuing to include qualitative analysis for richer interpretation of important explanatory factors.

    FlightGoggles: A Modular Framework for Photorealistic Camera, Exteroceptive Sensor, and Dynamics Simulation

    FlightGoggles is a photorealistic sensor simulator for perception-driven robotic vehicles. The key contributions of FlightGoggles are twofold. First, FlightGoggles provides photorealistic exteroceptive sensor simulation using graphics assets generated with photogrammetry. Second, it provides the ability to combine (i) synthetic exteroceptive measurements generated in silico in real time with (ii) vehicle dynamics and proprioceptive measurements generated in motio by vehicle(s) in a motion-capture facility. FlightGoggles is thus capable of simulating a virtual-reality environment around autonomous vehicle(s). While a vehicle is in flight in the FlightGoggles virtual-reality environment, exteroceptive sensors are rendered synthetically in real time, while all complex extrinsic dynamics are generated organically through the natural interactions of the vehicle. The FlightGoggles framework allows researchers to accelerate development by circumventing the need to estimate complex and hard-to-model interactions such as aerodynamics, motor mechanics, battery electrochemistry, and the behavior of other agents. The ability to perform vehicle-in-the-loop experiments with photorealistic exteroceptive sensor simulation facilitates novel research directions involving, e.g., fast and agile autonomous flight in obstacle-rich environments, safe human interaction, and flexible sensor selection. FlightGoggles has been utilized as the main testbed for selecting the nine teams that will advance in the AlphaPilot autonomous drone racing challenge. We survey approaches and results from the top AlphaPilot teams, which may be of independent interest.
    Comment: Initial version appeared at IROS 2019. Supplementary material can be found at https://flightgoggles.mit.edu. Revision includes description of new FlightGoggles features, such as a photogrammetric model of the MIT Stata Center, new rendering settings, and a Python API.
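
    The vehicle-in-the-loop idea can be pictured as a simple closed loop: the real vehicle flies in the motion-capture arena, its captured pose drives a synthetic camera render, and the rendered image feeds the perception stack under test. The sketch below is a minimal illustration of that loop, not the FlightGoggles API; get_mocap_pose, render_camera, and run_perception are hypothetical placeholders.

    ```python
    import time

    def get_mocap_pose():
        # Hypothetical: latest vehicle pose (position, quaternion) from mocap.
        return (0.0, 0.0, 1.0), (1.0, 0.0, 0.0, 0.0)

    def render_camera(position, orientation):
        # Hypothetical: photorealistic image rendered at the given pose.
        return b""

    def run_perception(image):
        # Hypothetical: the perception/control stack under test.
        pass

    # Closed loop: real dynamics in the mocap arena, synthetic sensing in silico.
    RATE_HZ = 60
    for _ in range(10 * RATE_HZ):  # ten seconds of vehicle-in-the-loop operation
        t0 = time.monotonic()
        pos, quat = get_mocap_pose()      # proprioception from the real vehicle
        image = render_camera(pos, quat)  # exteroception rendered in real time
        run_perception(image)             # the stack sees only synthetic sensing
        time.sleep(max(0.0, 1.0 / RATE_HZ - (time.monotonic() - t0)))
    ```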

    Real-time motion data annotation via action string

    Even though there has been explosive growth in motion capture data, there is still a lack of efficient and reliable methods to automatically annotate all the motions in a database. Moreover, because of the popularity of mocap devices in home entertainment systems, real-time human motion annotation or recognition is becoming more and more imperative. This paper presents a new motion annotation method that achieves both of these targets at the same time. It uses a probabilistic pose feature based on the Gaussian Mixture Model to represent each pose. After training a clustered pose feature model, a motion clip can be represented as an action string. A dynamic programming-based string matching method is then introduced to compare the differences between action strings. Finally, in order to meet the real-time target, we construct a hierarchical action string structure to quickly label each given action string. The experimental results demonstrate the efficacy and efficiency of our method.
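
    The dynamic-programming string matching step can be illustrated with a plain edit distance over pose-cluster labels; this is a minimal sketch of the general technique, not the paper's exact cost model.

    ```python
    def edit_distance(a, b):
        """Dynamic-programming edit distance between two action strings,
        where each element is a pose-cluster label."""
        m, n = len(a), len(b)
        # dp[i][j] = minimum edits turning a[:i] into b[:j]
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            dp[i][0] = i
        for j in range(n + 1):
            dp[0][j] = j
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                               dp[i][j - 1] + 1,         # insertion
                               dp[i - 1][j - 1] + cost)  # substitution
        return dp[m][n]

    # Example: two short action strings of pose-cluster IDs differ by 2 edits.
    print(edit_distance([3, 3, 7, 7, 2], [3, 7, 7, 2, 2]))  # -> 2
    ```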

    Unconstrained video monitoring of breathing behavior and application to diagnosis of sleep apnea

    This paper presents a new real-time automated infrared video monitoring technique for the detection of breathing anomalies, and its application in the diagnosis of obstructive sleep apnea. We introduce a novel motion model to detect subtle, cyclical breathing signals from video, a new 3-D unsupervised self-adaptive breathing template to learn individuals' normal breathing patterns online, and a robust action classification method to recognize abnormal breathing activities and limb movements. This technique avoids imposing positional constraints on the patient, allowing patients to sleep on their back or side, with or without facing the camera, and fully or partially occluded by the bedclothes. Moreover, shallow and abdominal breathing patterns do not adversely affect the performance of the method, and it is insensitive to environmental settings such as infrared lighting levels and camera view angles. The experimental results show that the technique achieves high accuracy (94% for the clinical data) in recognizing apnea episodes and body movements and is robust to various occlusion levels, body poses, body movements (i.e., minor head movement, limb movement, body rotation, and slight torso movement), and breathing behavior (e.g., shallow versus heavy breathing, mouth breathing, chest breathing, and abdominal breathing). © 2013 IEEE
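
    A common way to detect a subtle cyclical breathing signal from video (a generic stand-in, not the authors' motion model) is to reduce each frame to a scalar motion measure over the torso region and pick the dominant frequency in the respiratory band. A hedged sketch:

    ```python
    import numpy as np

    def breathing_rate_bpm(motion_signal, fps):
        """Estimate breathing rate from a 1-D motion signal (e.g. mean absolute
        frame difference over the torso region) by locating the dominant
        spectral peak in an assumed respiratory band of 0.1-0.7 Hz."""
        x = motion_signal - np.mean(motion_signal)   # remove the DC offset
        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
        band = (freqs >= 0.1) & (freqs <= 0.7)       # roughly 6-42 breaths/min
        peak = freqs[band][np.argmax(spectrum[band])]
        return 60.0 * peak

    # Synthetic check: 0.25 Hz breathing (15 breaths/min) sampled at 30 fps.
    t = np.arange(0.0, 60.0, 1.0 / 30.0)
    sig = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.random.randn(len(t))
    print(breathing_rate_bpm(sig, fps=30))  # ~15.0
    ```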

    Radar High Resolution Range & Micro-Doppler Analysis of Human Motions

    In radar imaging it is well known that relative motion or deformation of parts of illuminated objects induces additional features in the Doppler frequency spectra. These features are known as the micro-Doppler effect and appear as sidebands around the central Doppler frequency. They can provide valuable information about the structure of the moving parts and may be used for identification purposes [1]. Previous papers have mostly focused on 1D micro-Doppler analysis [2-4]. In this paper, we propose to emphasize the analysis of such "non-stationary targets" in a 2D imaging space, using both micro-Doppler and high range resolution analysis. As in 2D-ISAR imaging, range separation enables us to better discriminate the various effects caused by the time-varying reflectors. We will focus our study on human motion, showing how the micro-Doppler signature can be used to extract information on pedestrian gait. Examples are shown on both simulated and experimental data.
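
    Micro-Doppler sidebands of the kind described here are usually visualised with a short-time Fourier transform over slow time. The sketch below builds a synthetic return with a sinusoidally modulated Doppler component (a stand-in for limb motion; all parameters are illustrative) and computes its spectrogram with SciPy:

    ```python
    import numpy as np
    from scipy import signal

    fs = 1000.0                      # slow-time sampling rate in Hz (assumed)
    t = np.arange(0.0, 2.0, 1.0 / fs)

    # Central Doppler at 100 Hz (torso return) plus a sinusoidal micro-Doppler
    # modulation (e.g. a swinging limb) that creates sidebands around it.
    f_body, f_micro, depth = 100.0, 2.0, 30.0
    phase = 2 * np.pi * f_body * t + (depth / f_micro) * np.sin(2 * np.pi * f_micro * t)
    x = np.cos(phase)

    # The spectrogram is the joint time-frequency view in which the
    # micro-Doppler sidebands appear around the central Doppler line.
    f, frames, Sxx = signal.spectrogram(x, fs=fs, nperseg=256, noverlap=192)
    print(Sxx.shape)                 # (frequency bins, time frames)
    ```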

    Experimental Validation of Contact Dynamics for In-Hand Manipulation

    This paper evaluates state-of-the-art contact models at predicting the motions and forces involved in simple in-hand robotic manipulations. In particular, it focuses on three primitive actions (linear sliding, pivoting, and rolling) that involve contacts between a gripper, a rigid object, and their environment. The evaluation is done through thousands of controlled experiments designed to capture the motion of the object and gripper, and all contact forces and torques, at 250 Hz. We demonstrate that a contact modeling approach based on Coulomb's friction law and the maximum energy principle is effective at reasoning about interaction to first order, but limited for making accurate predictions. We attribute the major limitations to 1) the non-uniqueness of force resolution inherent to grasps with multiple hard contacts of complex geometries, 2) unmodeled dynamics due to contact compliance, and 3) unmodeled geometries due to manufacturing defects.
    Comment: International Symposium on Experimental Robotics, ISER 2016, Tokyo, Japan.
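
    For intuition on the Coulomb part of the contact model, the sketch below applies the textbook friction-cone test at a single point contact. It illustrates only the friction law itself, not the authors' full formulation with the maximum-energy (maximum-dissipation) principle:

    ```python
    import numpy as np

    def coulomb_contact(force, normal, mu):
        """Classify a contact force against the Coulomb friction cone.

        force  : 3-D contact force vector
        normal : unit contact normal
        mu     : friction coefficient
        Returns 'sticking' if the tangential component lies inside the cone
        (|f_t| <= mu * f_n), otherwise 'sliding'.
        """
        f_n = np.dot(force, normal)   # normal component of the force
        f_t = force - f_n * normal    # tangential (frictional) component
        return "sticking" if np.linalg.norm(f_t) <= mu * f_n else "sliding"

    # Example: a mostly-normal force stays inside the cone for mu = 0.5.
    print(coulomb_contact(np.array([0.1, 0.0, 1.0]),
                          np.array([0.0, 0.0, 1.0]), mu=0.5))  # sticking
    ```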

    Suppression of biodynamic interference in head-tracked teleoperation

    The utility of helmet-tracked sights to provide pointing commands for teleoperation of cameras, lasers, or antennas in aircraft is degraded by the presence of uncommanded, involuntary head motion, referred to as biodynamic interference. This interference limits the achievable precision required in pointing tasks. The noise contributions due to biodynamic interference consist of an additive component, which is correlated with aircraft vibration, and an uncorrelated, nonadditive component, referred to as remnant. An experimental simulation study is described which investigated the improvements achievable in pointing and tracking precision using dynamic display shifting in the helmet-mounted display. The experiment was conducted in a six-degree-of-freedom motion-base simulator with an emulated helmet-mounted display. Highly experienced pilot subjects performed precision head-pointing tasks while manually flying a visual flight-path tracking task. Four schemes using adaptive and low-pass filtering of the head motion were evaluated to determine their effects on task performance and pilot workload in the presence of whole-body vibration characteristic of helicopter flight. The results indicate that, for tracking tasks involving continuously moving targets, the adaptive plus low-pass filter configuration achieves improvements of up to 70 percent in on-target dwell time and up to 35 percent in rms tracking error. With the same filter configuration, the task of capturing randomly positioned, stationary targets shows an increase of up to 340 percent in the number of targets captured and an improvement of up to 24 percent in the average capture time. The adaptive plus low-pass filter combination was judged by each of the subjects to exhibit the best overall display dynamics.
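
    The low-pass element of these filter schemes can be illustrated with a standard Butterworth filter applied to a head-tracker signal; the cutoff frequency and filter order below are illustrative assumptions, not the study's actual design:

    ```python
    import numpy as np
    from scipy import signal

    def lowpass_head_signal(head_angle, fs, cutoff_hz=1.5, order=2):
        """Zero-phase Butterworth low-pass intended to suppress vibration-driven
        head motion while passing slower voluntary pointing commands.
        cutoff_hz and order are illustrative, not the study's values."""
        b, a = signal.butter(order, cutoff_hz, btype="low", fs=fs)
        return signal.filtfilt(b, a, head_angle)

    # Synthetic: slow voluntary pointing (0.2 Hz) plus a 4 Hz vibration remnant.
    fs = 100.0
    t = np.arange(0.0, 10.0, 1.0 / fs)
    voluntary = np.sin(2 * np.pi * 0.2 * t)
    raw = voluntary + 0.3 * np.sin(2 * np.pi * 4.0 * t)
    filtered = lowpass_head_signal(raw, fs)
    print(np.std(raw - voluntary), np.std(filtered - voluntary))  # error shrinks
    ```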

    Urban Air Mobility System Testbed Using CAVE Virtual Reality Environment

    Urban Air Mobility (UAM) refers to a system of air passenger and small-cargo transportation within an urban area. The UAM framework also includes other urban Unmanned Aerial Systems (UAS) services that will be supported by a mix of onboard, ground, piloted, and autonomous operations. Over the past few years, UAM research has gained wide interest from companies and federal agencies as an on-demand, innovative transportation option that can help reduce traffic congestion and pollution as well as increase mobility in metropolitan areas. The concept of UAM/UAS operation in the National Airspace System (NAS) remains an active area of research to ensure safe and efficient operations. With new developments in smart vehicle design and infrastructure for air traffic management, there is a need for methods to integrate and test the various components of the UAM framework. In this work, we report on the development of a virtual reality (VR) testbed using Cave Automatic Virtual Environment (CAVE) technology for human-automation teaming and airspace operation research in UAM. Using a four-wall projection system with motion capture, the CAVE provides an immersive virtual environment with real-time full-body tracking capability. We created a virtual environment consisting of the city of San Francisco and a vertical take-off-and-landing passenger aircraft that can fly between a downtown location and San Francisco International Airport. The aircraft can be operated autonomously or manually by a single pilot who maneuvers it using a flight control joystick. The interior of the aircraft includes a virtual cockpit display with vehicle heading, location, and speed information. The system can record simulation events and flight data for post-processing, and its parameters are customizable for different flight scenarios; hence, the CAVE VR testbed provides a flexible method for the development and evaluation of the UAM framework.
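
    As a minimal illustration of the kind of per-frame flight-data record such a testbed might log for post-processing (the field names and schema here are assumptions, not the actual system's format):

    ```python
    import csv
    import dataclasses

    @dataclasses.dataclass
    class FlightSample:
        """One logged simulation frame; all field names are illustrative."""
        t: float             # simulation time (s)
        lat: float           # vehicle latitude (deg)
        lon: float           # vehicle longitude (deg)
        alt_m: float         # altitude (m)
        heading_deg: float   # vehicle heading (deg)
        speed_mps: float     # ground speed (m/s)
        autonomous: bool     # True when flown by the autonomy, False when piloted

    def write_log(samples, path="flight_log.csv"):
        """Dump recorded frames to CSV for post-processing."""
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow([fld.name for fld in dataclasses.fields(FlightSample)])
            for s in samples:
                writer.writerow(dataclasses.astuple(s))

    write_log([FlightSample(0.0, 37.62, -122.38, 300.0, 90.0, 45.0, True)])
    ```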

    Motion Imitation Based on Sparsely Sampled Correspondence

    Existing techniques for motion imitation often suffer from a certain level of latency due to their computational overhead or the large set of correspondence samples they must search. To achieve real-time imitation with small latency, we present a framework in this paper to reconstruct motion on humanoids based on sparsely sampled correspondence. The imitation problem is formulated as finding the projection of a point from the configuration space of a human's poses into the configuration space of a humanoid. An optimal projection is defined as the one that minimizes a back-projected deviation among a group of candidates, which can be determined in a very efficient way. Benefiting from this formulation, effective projections can be obtained using sparse correspondence. Methods for generating these sparse correspondence samples are also introduced. Our method is evaluated by applying human motion captured by an RGB-D sensor to a humanoid in real time. Continuous motion can be realized and used in the example application of tele-operation.
    Comment: 8 pages, 8 figures, technical report.
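
    The projection step can be pictured as choosing, among sparse precomputed human-to-humanoid correspondence samples, the candidate that deviates least from the observed pose. The nearest-neighbour sketch below simplifies the back-projected deviation to a Euclidean distance in toy dimensions; it illustrates the idea only, not the paper's optimisation:

    ```python
    import numpy as np

    def project_pose(human_pose, human_samples, humanoid_samples):
        """Map an observed human pose to a humanoid configuration using
        sparse correspondence samples.

        human_samples    : (N, d_h) sampled human configurations
        humanoid_samples : (N, d_r) corresponding humanoid configurations
        Picks the candidate minimising the deviation from the observed pose,
        here simplified to a Euclidean distance in the human space.
        """
        deviations = np.linalg.norm(human_samples - human_pose, axis=1)
        best = int(np.argmin(deviations))
        return humanoid_samples[best], deviations[best]

    # Toy example: 100 sparse samples, 10-D human space, 8-D humanoid space.
    rng = np.random.default_rng(0)
    H = rng.normal(size=(100, 10))      # human-side samples
    R = rng.normal(size=(100, 8))       # humanoid-side correspondences
    q, dev = project_pose(H[42] + 0.01 * rng.normal(size=10), H, R)
    print(dev)                          # small: sample 42 is the best match
    ```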