
    Enhanced tracking system based on micro inertial measurements unit to measure sensorimotor responses in pigeons

    The ability to orientate and navigate is critically important for the survival of all migratory birds and other animals. Progress in understanding the mechanisms underlying these capabilities and, in particular, the importance of a sensitivity to the Earth’s magnetic field has, thus far, been constrained by the limited number of techniques available for the analysis of often complex behavioural responses. Methods used to track the movements of animals, such as birds, have varied depending on the degree of accuracy required. Most conventional approaches involve the use of a camera for recording and then measuring an animal's head movements in response to a variety of external stimuli, such as changes in magnetic fields. However, video tracking analysis (VTA) will generally only provide 2D tracking of head angle. Moreover, such video analysis can only provide information about movements when the head is in view of the camera. To overcome these limitations, the novel invention reported here utilises a lightweight (<10 g) Inertial Measurement Unit (IMU), positioned on the head of a homing pigeon, which contains a sensor with tri-axial orthogonal accelerometers, gyroscopes, and magnetometers. This highly compact (20.3×12.7×3 mm) system can be programmed and calibrated to provide measurements of the three rotational angles (roll, pitch and yaw) simultaneously, eliminating any drift; i.e. the movement of the pigeon's head is determined by detecting and estimating the directions of motion at all angles (even those outside the defined areas of tracking). Using an existing VTA approach as a baseline for comparison, it is demonstrated that IMU technology can comprehensively track a pigeon’s normal head movements with greater precision and in all 3 axes.
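
    As an illustration of the kind of computation such a tri-axial IMU performs, the sketch below derives roll and pitch from the accelerometer's gravity vector and a tilt-compensated yaw (heading) from the magnetometer. This is a generic static-orientation estimate under assumed axis and sign conventions, not the paper's calibrated algorithm; in the actual system, gyroscope integration would additionally be fused with these absolute angles to suppress noise and drift.

```python
# Hypothetical sketch: static roll/pitch from a tri-axial accelerometer
# and tilt-compensated yaw from a tri-axial magnetometer. Axis and sign
# conventions vary between devices; these follow a common aerospace form.
import numpy as np

def orientation_from_accel_mag(accel, mag):
    """Return (roll, pitch, yaw) in radians from one accel/mag sample."""
    ax, ay, az = np.asarray(accel, float) / np.linalg.norm(accel)
    # Gravity defines roll and pitch while the head is (near) static.
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    # Rotate the magnetometer reading into the horizontal plane and take
    # the heading angle (yaw) from its horizontal components.
    mx, my, mz = np.asarray(mag, float) / np.linalg.norm(mag)
    mx_h = (mx * np.cos(pitch) + my * np.sin(pitch) * np.sin(roll)
            + mz * np.sin(pitch) * np.cos(roll))
    my_h = my * np.cos(roll) - mz * np.sin(roll)
    yaw = np.arctan2(-my_h, mx_h)
    return roll, pitch, yaw
```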

    Review of Wearable Devices and Data Collection Considerations for Connected Health

    Wearable sensor technology has gradually extended its usability into a wide range of well-known applications. Wearable sensors can typically assess and quantify the wearer’s physiology and are commonly employed for human activity detection and quantified self-assessment. Wearable sensors are increasingly utilised to monitor patient health, rapidly assist with disease diagnosis, and help predict and often improve patient outcomes. Clinicians use various self-report questionnaires and well-known tests to report patient symptoms and assess their functional ability. These assessments are time-consuming and costly and depend on subjective patient recall. Moreover, measurements may not accurately demonstrate the patient’s functional ability whilst at home. Wearable sensors can be used to detect and quantify specific movements in different applications. The volume of data collected by wearable sensors during long-term assessment of ambulatory movement can become immense in size. This paper discusses current techniques used to track and record various human body movements, as well as techniques used to measure activity and sleep from long-term data collected by wearable technology devices.
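
    To give a sense of how such long-term recordings are condensed, a common first step is to reduce the raw accelerometer stream to a per-epoch activity summary. The sketch below computes a signal magnitude area (SMA) per epoch; the sampling rate, epoch length, and moving-average gravity removal are illustrative assumptions, not parameters from any reviewed device.

```python
# Hypothetical sketch: per-epoch signal magnitude area (SMA) from a
# long-term wearable accelerometer recording.
import numpy as np

def activity_sma(acc, fs=50.0, epoch_s=10.0):
    """acc: (N, 3) accelerometer samples in g; returns one SMA per epoch."""
    # Estimate the static (gravity) component with a 1 s moving average
    # and subtract it, leaving the dynamic (movement) component.
    win = int(fs)
    kernel = np.ones(win) / win
    gravity = np.apply_along_axis(np.convolve, 0, acc, kernel, mode="same")
    dyn = acc - gravity
    # Split into fixed-length epochs, discarding any trailing remainder.
    n = int(fs * epoch_s)
    epochs = dyn[: len(dyn) // n * n].reshape(-1, n, 3)
    # SMA: mean absolute dynamic acceleration, summed over the three axes.
    return np.abs(epochs).mean(axis=1).sum(axis=1)
```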

    Low-Cost Sensors and Biological Signals

    Many sensors are currently available at prices lower than USD 100 and cover a wide range of biological signals: motion, muscle activity, heart rate, etc. Such low-cost sensors have metrological features allowing them to be used in everyday life and clinical applications, where gold-standard equipment is both too expensive and too time-consuming to use. The selected papers present current applications of low-cost sensors in domains such as physiotherapy, rehabilitation, and affective technologies. The results cover various aspects of low-cost sensor technology from hardware design to software optimization.

    Inertial Motion Capture System for Biomechanical Analysis in Pressure Suits

    A non-invasive system has been developed at the University of Maryland Space Systems Laboratory with the goal of providing a new capability for quantifying the motion of the human inside a space suit. Based on an array of six microprocessors and eighteen microelectromechanical (MEMS) inertial measurement units (IMUs), the Body Pose Measurement System (BPMS) allows the monitoring of the kinematics of the suit occupant in an unobtrusive, self-contained, lightweight and compact fashion, without requiring any external equipment such as that required by modern optical motion capture systems. BPMS measures and stores the accelerations, angular rates and magnetic fields acting upon each IMU, which are mounted on the head, torso, and each segment of each limb. In order to convert the raw data into a more useful form, such as a set of body segment angles quantifying pose and motion, a series of geometrical models and a non-linear complementary filter were implemented. The first portion of this work focuses on assessing system performance, which was measured by comparing the BPMS filtered data against rigid body angles measured through an external VICON optical motion capture system. This type of system is the industry standard, and is used here for independent measurement of body pose angles. By comparing the two sets of data, performance metrics such as BPMS system operational conditions, accuracy, and drift were evaluated and correlated against VICON data. After the system and models were verified and their capabilities and limitations assessed, a series of pressure suit evaluations were conducted. Three different pressure suits were used to identify the relationship between usable range of motion and internal suit pressure. In addition to addressing range of motion, a series of exploration tasks were also performed, recorded, and analysed in order to identify different motion patterns and trajectories as suit pressure is increased and overall suit mobility is reduced. The focus of these evaluations was to quantify the reduction in mobility when operating in any of the evaluated pressure suits. These data should be of value in defining new low-cost alternatives for pressure suit performance verification and evaluation. This work demonstrates that the BPMS technology is a viable alternative or companion to optical motion capture; while BPMS is the first motion capture system designed specifically to measure the kinematics of a human in a pressure suit, its capabilities are not constrained to being a measurement tool. The last section of the manuscript is devoted to possible future uses for the system, with a specific focus on pressure suit applications such as the use of BPMS as a master control interface for robot teleoperation, as well as an input interface for future robotically augmented pressure suits.
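
    The non-linear complementary filter mentioned above exploits two complementary error characteristics: accelerometer-derived angles are absolute but noisy during motion, while integrated gyroscope rates are smooth but drift. A minimal single-axis sketch of that idea follows, with an assumed blending gain rather than the thesis's actual filter design.

```python
# Hypothetical single-axis complementary filter: fuse gyro integration
# (smooth, drifting) with accelerometer tilt (absolute, noisy).
import numpy as np

def complementary_filter(gyro_rate, accel, dt, alpha=0.98):
    """gyro_rate: (N,) roll rate in rad/s; accel: (N, 3) samples."""
    angle = 0.0
    out = np.empty(len(gyro_rate))
    for i, (w, a) in enumerate(zip(gyro_rate, accel)):
        accel_angle = np.arctan2(a[1], a[2])  # absolute tilt from gravity
        # High-pass the gyro path, low-pass the accel path: the blend
        # keeps short-term gyro smoothness and long-term accel stability.
        angle = alpha * (angle + w * dt) + (1.0 - alpha) * accel_angle
        out[i] = angle
    return out
```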

    Fixed-wing MAV attitude stability in atmospheric turbulence, part 1: Suitability of conventional sensors

    Fixed-wing Micro-Aerial Vehicles (MAVs) need effective sensors that can rapidly detect turbulence-induced motion perturbations. Current MAV attitude control systems rely on inertial sensors. These systems can be described as reactive, detecting the disturbance only after the aircraft has responded to the disturbing phenomena. In this part of the paper, the current state of the art in reactive attitude sensing for fixed-wing MAVs is reviewed. A scheme for classifying the range of existing and emerging sensing techniques is presented. The features and performance of the sensing approaches are discussed in the context of their application to MAV attitude control systems in turbulent environments. It is found that the use of single sensors is insufficient for MAV control in the presence of turbulence and that potential gains can be realised from multi-sensor systems. A subsequent paper to be published in this journal will investigate novel attitude sensors which have the potential to improve attitude control of MAVs in turbulence.

    Variable Vector Countermeasure Suit (V2Suit) for Space Habitation and Exploration

    The "Variable Vector Countermeasure Suit (V2Suit) for Space Habitation and Exploration" is a visionary system concept that will revolutionize space missions by providing a platform for integrating sensors and actuators with daily astronaut intravehicular activities to improve human health and performance. The V2Suit uses control moment gyroscopes (CMGs) within a miniaturized module placed on body segments to provide a "viscous resistance" during movements and a countermeasure to the sensorimotor and musculoskeletal adaptation performance decrements that manifest themselves while living and working in microgravity and during gravitational transitions during long-duration spaceflight, including post-flight recovery and rehabilitation. Through an integrated design, system initialization, and control systems approach the V2Suit is capable of generating this "viscous resistance" along an arbitrarily specified direction of "down." When movements are made, for example, parallel to that "down" direction a resistance is applied, and when the movement is perpendicular to that direction no resistance is applied. The V2Suit proposes to be a countermeasure to this spaceflight-related adaptation and de-conditioning and the unique sensorimotor characteristics associated with living and working in 0-G, which are critical for future long-duration space missions.This NIAC Phase I project focused on detailing several aspects of the V2Suit concept, including human-system integration, system architecture, computer aided design (CAD) modeling, and closed-loop simulation and analysis. In addition, early-stage breadboard prototyping of several aspects of the V2Suit system modules enabled characterization of expected performance and identified areas for further research and development to enable operational implementation of the V2Suit. In particular, potential challenges with integration of commercial-off-the-shelf components were identified. The key enabler for operational use and adoption of the V2Suit is a low-profile body worn form factor that does not interfere with normal, everyday movements and interfaces adequately with the body as to provide the generated gyroscopic torque for the perceptions of movement with a "viscous resistance." These aspects were investigated through mockups using a life-size mannequin, and through body attachment mechanisms on the breadboard prototype. Through the evaluation and investigation of commercially-available components, as well as an identification of desirable form factors, CAD models of the V2Suit modules were developed. These models included all of the required elements and spin motors, flywheel masses, gimbal motors, slip rings, inertial measurement units, motor controllers, and the required mounting brackets/hardware and cabling. The configuration and orientation of the control moment gyroscopes (CMGs) was specified according to results from the modeling, simulation and analysis. Two revisions of the CAD model were investigated through closed-loop simulation of the CMGs, and their ability to generate a resultant reaction force during movement and null undesirable torques due to changes in the direction of the angular momentum vector as a result of the normal body movements

    Addressing the problem of Interaction in fully immersive Virtual Environments: from raw sensor data to effective devices

    Immersion into Virtual Reality is a perception of being physically present in a non-physical world. The perception is created by surrounding the user of the VR system with images, sound or other stimuli that provide an engrossing total environment. The use of technological devices such as stereoscopic cameras, head-mounted displays, tracking systems and haptic interfaces allows for user experiences providing a physical feeling of being in a realistic world, and the term “immersion” is a metaphoric use of the experience of submersion applied to representation, fiction or simulation. One of the main peculiarities of fully immersive virtual reality is the enhancement of the simple passive viewing of a virtual environment with the ability to manipulate virtual objects inside it. This Thesis project investigates such interfaces and metaphors for interaction and manipulation tasks. In particular, the research activity conducted allowed the design of a thimble-like interface that can be used to recognize the human hand’s orientation in real time and infer a simplified but effective model of the hand’s relative motion and gestures. Inside the virtual environment, users provided with the developed systems will therefore be able to operate with natural hand gestures in order to interact with the scene; for example, they could perform positioning tasks by moving, rotating and resizing existing objects, or create new ones from scratch. This approach is particularly suitable when there is the need for the user to operate in a natural way, performing smooth and precise movements. Possible applications of the system to industry are immersive design, in which the user can perform Computer-Aided Design (CAD) totally immersed in a virtual environment, and operator training, in which the user can be trained on a 3D model in assembling or disassembling complex mechanical machinery, following predefined sequences. The thesis has been organized around the following project plan:
    - Collection of the relevant State of the Art
    - Evaluation of design choices and alternatives for the interaction hardware
    - Development of the necessary embedded firmware
    - Integration of the resulting devices in a complex interaction test-bed
    - Development of demonstrative applications implementing the device
    - Implementation of advanced haptic feedback
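
    The core manipulation step such a thimble interface performs is applying the sensed hand orientation to objects in the scene. The sketch below rotates a virtual object's local axis by an orientation quaternion; the (w, x, y, z) convention and the sample values are assumptions for illustration, not the thesis's implementation.

```python
# Hypothetical sketch: rotate a virtual object's local axis by the hand
# orientation quaternion reported by a thimble-mounted IMU.
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    v = np.asarray(v, float)
    # Standard expansion of q * v * q_conjugate for unit quaternions.
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

# Point the object's forward axis where the fingertip points
# (here: a 30-degree rotation about the y axis).
forward = quat_rotate([0.9659, 0.0, 0.2588, 0.0], [1.0, 0.0, 0.0])
print(forward)  # ~ [0.866, 0.0, -0.5]
```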

    Fused mechanomyography and inertial measurement for human-robot interface

    Human-Machine Interfaces (HMI) are the technology through which we interact with the ever-increasing quantity of smart devices surrounding us. The fundamental goal of an HMI is to facilitate robot control by uniting a human operator, as the supervisor, with a machine, as the task executor. Sensors, actuators, and onboard intelligence have not reached the point where robotic manipulators may function with complete autonomy, and therefore some form of HMI is still necessary in unstructured environments. These may include environments where direct human action is undesirable or infeasible, and situations where a robot must assist and/or interface with people. Contemporary literature has introduced concepts such as body-worn mechanical devices, instrumented gloves, inertial or electromagnetic motion tracking sensors on the arms, head, or legs, electroencephalographic (EEG) brain activity sensors, electromyographic (EMG) muscular activity sensors and camera-based (vision) interfaces to recognize hand gestures and/or track arm motions for assessment of operator intent and generation of robotic control signals. While these developments offer a wealth of future potential, their utility has been largely restricted to laboratory demonstrations in controlled environments due to issues such as lack of portability and robustness and an inability to extract operator intent for both arm and hand motion. Wearable physiological sensors hold particular promise for capture of human intent/command. EMG-based gesture recognition systems in particular have received significant attention in recent literature. As wearable pervasive devices, they offer benefits over camera or physical input systems in that they neither inhibit the user physically nor constrain the user to a location where the sensors are deployed. Despite these benefits, EMG alone has yet to demonstrate the capacity to recognize both gross movement (e.g. arm motion) and finer grasping (e.g. hand movement). As such, many researchers have proposed fusing muscle activity (EMG) and motion tracking (e.g. inertial measurement) to combine arm motion and grasp intent as HMI input for manipulator control. However, such work has arguably reached a plateau, since EMG suffers from interference from environmental factors which cause signal degradation over time, demands an electrical connection with the skin, and has not demonstrated the capacity to function outside controlled environments for long periods of time. This thesis proposes a new form of gesture-based interface utilising a novel combination of inertial measurement units (IMUs) and mechanomyography sensors (MMGs). The modular system permits numerous configurations of IMUs to derive body kinematics in real time and uses this to convert arm movements into control signals. Additionally, bands containing six mechanomyography sensors were used to observe muscular contractions in the forearm which are generated by specific hand motions. This combination of continuous and discrete control signals allows a large variety of smart devices to be controlled. Several methods of pattern recognition were implemented to provide accurate decoding of the mechanomyographic information, including Linear Discriminant Analysis and Support Vector Machines. Based on these techniques, accuracies of 94.5% and 94.6% respectively were achieved for 12-gesture classification. In real-time tests, accuracies of 95.6% were achieved in 5-gesture classification.
It has previously been noted that MMG sensors are susceptible to motion-induced interference. The thesis established that arm pose also changes the measured signal. This thesis introduces a new method of fusing IMU and MMG data to provide a classification that is robust to both of these sources of interference. Additionally, an improvement in orientation estimation, and a new orientation estimation algorithm, are proposed. These improvements to the robustness of the system provide the first solution that is able to reliably track both motion and muscle activity for extended periods of time for HMI outside a clinical environment. Applications in robot teleoperation in both real-world and virtual environments were explored. With multiple degrees of freedom, robot teleoperation provides an ideal test platform for HMI devices, since it requires a combination of continuous and discrete control signals. The field of prosthetics also represents a unique challenge for HMI applications. In an ideal situation, the sensor suite should be capable of detecting the muscular activity in the residual limb which is naturally indicative of intent to perform a specific hand pose, and of triggering this pose in the prosthetic device. Dynamic environmental conditions within a socket, such as skin impedance, have delayed the translation of gesture control systems into prosthetic devices; mechanomyography sensors, however, are unaffected by such issues. There is huge potential for a system like this to be utilised as a controller as ubiquitous computing systems become more prevalent, and as the desire for a simple, universal interface increases. Such systems have the potential to impact significantly on the quality of life of prosthetic users and others.
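
    As a rough sketch of the classification stage described above, the snippet below feeds per-channel RMS features from six MMG channels to the two classifiers the thesis names, LDA and SVM, via scikit-learn. The RMS feature choice, window shape, and the synthetic placeholder data are assumptions; real segmented MMG recordings would replace them (on random data, only chance-level accuracy is expected).

```python
# Hypothetical sketch of MMG gesture classification with LDA and SVM.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def rms_features(windows):
    """windows: (n_windows, n_samples, 6 channels) -> (n_windows, 6)."""
    return np.sqrt((windows ** 2).mean(axis=1))

# Placeholder data standing in for segmented MMG windows: 12 gestures,
# 20 windows each, 200 samples per window, 6 sensor channels.
rng = np.random.default_rng(0)
X = rms_features(rng.standard_normal((240, 200, 6)))
y = np.repeat(np.arange(12), 20)

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="rbf"))]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```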

    Pushing the limits of inertial motion sensing
