
    Towards the development of a smart flying sensor: illustration in the field of precision agriculture

    Sensing is essential for quantifying productivity and product quality and for supporting decision making. Applications such as mapping, surveillance, exploration and precision agriculture require a reliable platform for remote sensing. This paper presents the first steps towards the development of a smart flying sensor based on an unmanned aerial vehicle (UAV). The concept of smart remote sensing is illustrated and its performance tested on the task of mapping the volume of grain inside a trailer during forage harvesting. The novelty lies in: (1) the development of a position-estimation method with time-delay compensation based on inertial measurement unit (IMU) sensors and image processing; (2) a method to build a 3D map using information obtained from a regular camera; and (3) the design and implementation of a path-following control algorithm using model predictive control (MPC). Experimental results on a lab-scale system validate the effectiveness of the proposed methodology.
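    The abstract's first contribution, time-delay compensation between IMU propagation and slower image-processing fixes, can be sketched as follows. This is a hypothetical illustration, not the paper's estimator: the class name, the buffered-state scheme and the simple gain-based re-propagation are all assumptions.

```python
from collections import deque

class DelayCompensator:
    """Aligns delayed camera position fixes with buffered IMU estimates."""

    def __init__(self, horizon=100):
        # Ring buffer of IMU-propagated states: (timestamp, position, velocity).
        self.buffer = deque(maxlen=horizon)

    def predict(self, t, pos, vel):
        # Store each IMU-propagated state so a late camera fix can be
        # applied at the instant it was actually taken.
        self.buffer.append((t, pos, vel))

    def correct(self, t_meas, pos_meas, gain=0.5):
        # Find the buffered state closest in time to the measurement.
        t_b, pos_b, _ = min(self.buffer, key=lambda s: abs(s[0] - t_meas))
        offset = gain * (pos_meas - pos_b)
        # Re-propagate: shift every state from that instant onward.
        self.buffer = deque(
            ((t, p + offset if t >= t_b else p, v) for t, p, v in self.buffer),
            maxlen=self.buffer.maxlen,
        )
        return self.buffer[-1][1]  # corrected current position
```

    In a full estimator the correction would act on the state covariance as well; here a scalar position and a fixed gain keep the buffering idea visible.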

    Apollo experience report: Guidance and control systems: Primary guidance, navigation, and control system development

    The primary guidance, navigation, and control systems for both the lunar module and the command module are described. Development of the Apollo primary guidance systems is traced from the adaptation of the Polaris Mark II system through the evolution from Block I to Block II configurations; the discussion includes design concepts used, test and qualification programs performed, and major problems encountered. The major subsystems (inertial, computer, and optical) are covered. Separate sections on the inertial components (gyroscopes and accelerometers) are presented because these components represent a major contribution to the success of the primary guidance, navigation, and control system.

    MirrorGen: Wearable Gesture Recognition using Synthetic Videos

    In recent years, deep learning systems have outperformed traditional machine learning systems in most domains. There has been much recent research in hand gesture recognition using wearable sensors, owing to the numerous advantages these systems have over vision-based ones. However, the lack of extensive datasets and the nature of Inertial Measurement Unit (IMU) data make it difficult to apply deep learning techniques to them. Although many machine learning models achieve good accuracy, most assume that training data is available for every user, while approaches that do not require user data have lower accuracies. MirrorGen is a technique that generates synthetic videos of hand movements from wearable sensor data, mitigating the traditional challenges of vision-based recognition such as occlusion, lighting restrictions, lack of viewpoint variations, and environmental noise. In addition, MirrorGen allows for user-independent recognition with minimal human effort during data collection. It also leverages advances in vision-based recognition through techniques such as optical-flow extraction and 3D convolution. Projecting the orientation (IMU) information onto a video recovers position information of the hands. To validate these claims, we perform entropy analysis on various configurations: raw data, a stick model, a hand model and real video. The human hand model is found to have an optimal entropy that enables user-independent recognition, and it serves as a more pervasive option than video-based recognition. An average user-independent recognition accuracy of 99.03% was achieved on a sign language dataset with 59 different users and 20 different signs with 20 repetitions each, for a total of 23k training instances. Moreover, synthetic videos can be used to augment real videos to improve recognition accuracy. Dissertation/Thesis: Masters Thesis, Computer Science, 201
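    The entropy analysis mentioned above compares how much visual information each rendering (raw data, stick model, hand model, real video) carries. The thesis does not give its estimator, so the histogram-based Shannon entropy below is an assumption, with `frame_entropy` a hypothetical name:

```python
import numpy as np

def frame_entropy(frame, bins=256):
    """Shannon entropy (in bits) of a frame's grey-level histogram."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())
```

    Under this estimator, a uniform frame scores 0 bits, and entropy grows with the diversity of pixel intensities, giving a simple scalar for ranking the four configurations.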

    Evaluating indoor positioning systems in a shopping mall: the lessons learned from the IPIN 2018 competition

    The Indoor Positioning and Indoor Navigation (IPIN) conference holds an annual competition in which indoor localization systems from different research groups worldwide are evaluated empirically. The objective of this competition is to establish a systematic evaluation methodology with rigorous metrics, both for real-time (on-site) and post-processing (off-site) situations, in a realistic environment unfamiliar to the prototype developers. For the IPIN 2018 conference, this competition was held on September 22nd, 2018, in Atlantis, a large shopping mall in Nantes (France). Four competition tracks (two on-site and two off-site) were designed. They consisted of 1 km routes traversing several floors of the mall. Along these paths, 180 points were topographically surveyed with 10 cm accuracy to serve as ground-truth landmarks, combining theodolite measurements, differential global navigation satellite system (GNSS) measurements and 3D scanner systems. 34 teams effectively competed. The accuracy score corresponds to the third quartile (75th percentile) of an error metric that combines the horizontal positioning error and the floor detection. The best results for the on-site tracks showed an accuracy score of 11.70 m (Track 1) and 5.50 m (Track 2), while the best results for the off-site tracks showed an accuracy score of 0.90 m (Track 3) and 1.30 m (Track 4). These results show that it is possible to obtain high-accuracy indoor positioning solutions in large, realistic environments using wearable light-weight sensors without deploying any beacons. This paper describes the organization of the tracks, analyzes the methodology used to quantify the results, reviews the lessons learned from the competition and discusses its future.
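    The accuracy score described above, the third quartile of a combined horizontal-plus-floor error, can be sketched as follows. The 15 m per-floor penalty is the value used in past IPIN editions and is an assumption here, as is the nearest-rank percentile rule; the paper itself should be consulted for the exact definition.

```python
import math

def accuracy_score(errors, floor_penalty=15.0):
    """Third quartile (75th percentile) of the combined error metric.

    Each sample is (horizontal_error_m, floor_difference).
    """
    combined = sorted(h + floor_penalty * abs(df) for h, df in errors)
    # Nearest-rank 75th percentile (1-indexed rank, converted to 0-indexed).
    k = math.ceil(0.75 * len(combined)) - 1
    return combined[k]
```

    Using the third quartile rather than the mean keeps the score robust to a few catastrophic fixes while still penalizing systems that are frequently wrong.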

    A meta-learning algorithm for respiratory flow prediction from FBG-based wearables in unrestrained conditions

    Get PDF
    The continuous monitoring of an individual's breathing can be an instrument for the assessment and enhancement of human wellness. Specific respiratory features are unique markers of the deterioration of a health condition, the onset of a disease, fatigue and stressful circumstances. The early and reliable prediction of high-risk situations can enable intervention strategies that may be lifesaving. Hence, smart wearables for the continuous monitoring of breathing have recently been attracting the interest of many researchers and companies. However, most existing approaches do not provide comprehensive respiratory information. For this reason, a meta-learning algorithm based on LSTM neural networks is herein proposed for inferring the respiratory flow from a wearable system embedding FBG sensors and inertial units. Several conventional machine learning approaches were also implemented for comparison. The meta-learning algorithm turned out to be the most accurate in predicting respiratory flow for new subjects. Furthermore, the memory capability of the LSTM model proved advantageous for capturing relevant aspects of the breathing pattern. The algorithms were tested under different conditions, both static and dynamic, and with more unobtrusive device configurations. The meta-learning results demonstrated that a short one-time calibration can provide subject-specific models which predict the respiratory flow with high accuracy, even when the number of sensors is reduced. Flow RMS errors on the test set ranged from 22.03 L/min, when the minimum number of sensors was considered, to 9.97 L/min for the complete setting (target flow range: 69.231 ± 21.477 L/min). The correlation coefficient r between the target and the predicted flow changed accordingly, being higher (r = 0.9) for the most comprehensive and heterogeneous wearable device configuration. Similar results were achieved even with simpler settings which included the thoracic sensors (r ranging from 0.84 to 0.88; test flow RMSE = 10.99 L/min when exclusively using the thoracic FBGs). The further estimation of respiratory parameters, i.e., rate and volume, with low errors across different breathing behaviors and postures proved the potential of such an approach. These findings lay the foundation for the implementation of reliable custom solutions and more sophisticated artificial intelligence-based algorithms for daily life health-related applications.
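    The two figures of merit quoted in this abstract, flow RMSE in L/min and the Pearson correlation r between target and predicted flow, are standard and can be computed as below; the function names are illustrative, not taken from the paper.

```python
import math

def rmse(target, pred):
    """Root-mean-square error between target and predicted flow series."""
    return math.sqrt(
        sum((t - p) ** 2 for t, p in zip(target, pred)) / len(target)
    )

def pearson_r(x, y):
    """Pearson correlation coefficient between two series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

    Reporting both is informative because RMSE is sensitive to amplitude errors while r captures how well the predicted waveform tracks the shape of the true flow.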