46 research outputs found

    Vision-State Fusion: Improving Deep Neural Networks for Autonomous Robotics

    Vision-based perception tasks fulfill a paramount role in robotics, facilitating solutions to many challenging scenarios, such as acrobatic maneuvers of autonomous unmanned aerial vehicles (UAVs) and robot-assisted high-precision surgery. Most control-oriented and egocentric perception problems are commonly solved by taking advantage of the robot's state estimation as an auxiliary input, particularly when artificial intelligence comes into the picture. In this work, we propose to apply a similar approach for the first time - to the best of our knowledge - to allocentric perception tasks, where the target variables refer to an external subject. We prove how our general and intuitive methodology improves the regression performance of deep convolutional neural networks (CNNs) on ambiguous problems such as allocentric 3D pose estimation. By analyzing three highly different use cases, spanning from grasping with a robotic arm to following a human subject with a pocket-sized UAV, our results consistently improve the R2 metric, up to +0.514, compared to their stateless baselines. Finally, we validate the in-field performance of a closed-loop autonomous pocket-sized UAV on the human pose estimation task. Our results show a significant reduction, i.e., 24% on average, in the mean absolute error of our stateful CNN. Comment: 8 pages, 8 figures
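    The abstract above reports improvements in the R2 (coefficient of determination) regression metric of up to +0.514. As a reminder of what that metric measures, here is a minimal numpy sketch (not the authors' code; all values are illustrative):

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot.
    1.0 is a perfect fit; 0.0 matches predicting the mean."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

# Illustrative targets and predictions (not from the paper)
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])
print(round(r2_score(y_true, y_pred), 3))  # → 0.98
```

    A gain of +0.5 on this metric is large: it corresponds to halving the residual variance relative to a mean predictor.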

    Body composition and morphological assessment of nutritional status in adults : a review of anthropometric variables

    This document is the Accepted Manuscript version of the following article: A. M. Madden and S. Smith, 'Body composition and morphological assessment of nutritional status in adults: a review of anthropometric variables', Journal of Human Nutrition and Dietetics, vol. 29 (1): 7-25, February 2016, DOI: https://doi.org/10.1111/jhn.12278 . This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Self-Archiving.

    Evaluation of body composition is an important part of assessing nutritional status and provides prognostically useful data and an opportunity to monitor the effects of nutrition-related disease progression and nutritional intervention. The aim of this narrative review is to critically evaluate body composition methodology in adults, focusing on anthropometric variables. The variables considered include height, weight, body mass index and alternative indices, trunk measurements (waist and hip circumferences and sagittal abdominal diameter), limb measurements (mid-upper arm and calf circumferences) and skinfold thickness. The importance of adhering to a defined measurement protocol, checking measurement error and the need to interpret measurements using appropriate population-specific cut-off values to identify health risks were identified. Selecting the optimum method of assessing body composition using anthropometry depends on the purpose, i.e. evaluating obesity or undernutrition, and requires practitioners to have a good understanding of both practical and theoretical limitations and to interpret the results wisely. Peer reviewed

    Vision-state Fusion: Improving Deep Neural Networks for Autonomous Robotics

    Vision-based deep learning perception fulfills a paramount role in robotics, facilitating solutions to many challenging scenarios, such as acrobatic maneuvers of autonomous unmanned aerial vehicles (UAVs) and robot-assisted high-precision surgery. Control-oriented end-to-end perception approaches, which directly output control variables for the robot, commonly take advantage of the robot’s state estimation as an auxiliary input. When intermediate outputs are estimated and fed to a lower-level controller, i.e., mediated approaches, the robot’s state is commonly used as an input only for egocentric tasks, which estimate physical properties of the robot itself. In this work, we propose to apply a similar approach for the first time – to the best of our knowledge – to non-egocentric mediated tasks, where the estimated outputs refer to an external subject. We prove how our general methodology improves the regression performance of deep convolutional neural networks (CNNs) on a broad class of non-egocentric 3D pose estimation problems, with minimal computational cost. By analyzing three highly different use cases, spanning from grasping with a robotic arm to following a human subject with a pocket-sized UAV, our results consistently improve the R2 regression metric, up to +0.51, compared to their stateless baselines. Finally, we validate the in-field performance of a closed-loop autonomous cm-scale UAV on the human pose estimation task. Our results show a significant reduction, i.e., 24% on average, in the mean absolute error of our stateful CNN, compared to a State-of-the-Art stateless counterpart. ISSN: 0921-0296; ISSN: 1573-040
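    The core idea described above — feeding the robot's state estimate into a CNN as an auxiliary input alongside the camera image — is often implemented as late fusion: the state vector is concatenated with the flattened convolutional features before the regression head. A minimal numpy sketch of that fusion step, with hypothetical dimensions and random weights standing in for a trained network (this is not the authors' architecture):

```python
import numpy as np

def fuse_and_regress(cnn_features, robot_state, W, b):
    """Late fusion: concatenate the flattened CNN feature map with the
    robot's state estimate, then apply a linear regression head."""
    fused = np.concatenate([cnn_features.ravel(), robot_state])
    return W @ fused + b

rng = np.random.default_rng(42)
cnn_features = rng.normal(size=(4, 4, 8))   # hypothetical small conv feature map
robot_state = np.array([0.1, -0.2, 1.5])    # e.g. drone pitch, roll, altitude
out_dim = 4                                 # e.g. relative subject pose (x, y, z, yaw)

W = rng.normal(size=(out_dim, cnn_features.size + robot_state.size))
b = np.zeros(out_dim)

pose = fuse_and_regress(cnn_features, robot_state, W, b)
print(pose.shape)  # → (4,)
```

    The appeal noted in the abstract is that this adds only a few extra input dimensions, so the computational overhead over the stateless baseline is negligible.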

    A Deep Learning-Based Face Mask Detector for Autonomous Nano-Drones (Student Abstract)

    We present a deep neural network (DNN) for visually classifying whether a person is wearing a protective face mask. Our DNN can be deployed on a resource-limited, sub-10-cm nano-drone: this robotic platform is an ideal candidate to fly in human proximity and perform ubiquitous visual perception safely. This paper describes our pipeline, starting from the dataset collection; the selection and training of a full-precision (i.e., float32) DNN; and a quantization phase (i.e., int8), enabling the DNN's deployment on a parallel ultra-low power (PULP) system-on-chip aboard our target nano-drone. Results demonstrate the efficacy of our pipeline, with a mean area under the ROC curve score of 0.81, which drops by only ~2% when quantized to 8-bit for deployment.
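    The float32-to-int8 quantization step mentioned above is what makes deployment on a milliwatt-class PULP chip feasible. A common scheme (symmetric per-tensor affine quantization, shown here as a generic numpy sketch, not the paper's specific procedure) maps each weight to an 8-bit integer plus a shared scale factor:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map float32 weights to
    [-127, 127] using a single scale factor."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original float32 weights."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.003, 1.0], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print(q.dtype, float(np.max(np.abs(w - w_hat))) <= scale / 2 + 1e-6)  # int8, error within half a step
```

    Storage drops 4x and inference runs on integer units; the rounding error per weight is bounded by half a quantization step, which is consistent with the small (~2%) accuracy drop the abstract reports.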

    Deep Neural Network Architecture Search for Accurate Visual Pose Estimation aboard Nano-UAVs

    Miniaturized autonomous unmanned aerial vehicles (UAVs) are an emerging and trending topic. With a form factor as big as the palm of one hand, they can reach spots otherwise inaccessible to bigger robots and safely operate in human surroundings. The simple electronics aboard such robots (sub-100 mW) make them particularly cheap and attractive but pose significant challenges in enabling sophisticated onboard intelligence. In this work, we leverage a novel neural architecture search (NAS) technique to automatically identify several Pareto-optimal convolutional neural networks (CNNs) for a visual pose estimation task. Our work demonstrates how real-life, field-tested robotics applications can concretely leverage NAS technologies to automatically and efficiently optimize CNNs for the specific hardware constraints of small UAVs. We deploy several NAS-optimized CNNs and run them in closed loop aboard a 27-g Crazyflie nano-UAV equipped with a parallel ultra-low power System-on-Chip. Our results improve the State-of-the-Art by reducing the in-field control error by 32% while achieving a real-time onboard inference rate of ~10 Hz @ 10 mW and ~50 Hz @ 90 mW.
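    "Pareto-optimal" in the abstract above means CNNs for which no other candidate is both more accurate and cheaper to run. A generic sketch of that filtering step over hypothetical NAS candidates (the numbers and the two-objective setup are illustrative, not the paper's data):

```python
def pareto_front(candidates):
    """Keep candidates not dominated by any other, where lower error
    and lower latency are both better."""
    front = []
    for i, (err_i, lat_i) in enumerate(candidates):
        dominated = any(
            err_j <= err_i and lat_j <= lat_i and (err_j < err_i or lat_j < lat_i)
            for j, (err_j, lat_j) in enumerate(candidates) if j != i
        )
        if not dominated:
            front.append((err_i, lat_i))
    return front

# (regression error, inference latency in ms) for hypothetical NAS candidates
cands = [(0.30, 20.0), (0.25, 35.0), (0.40, 15.0), (0.30, 40.0), (0.25, 50.0)]
print(sorted(pareto_front(cands)))
```

    The surviving candidates trace the accuracy/latency trade-off curve, from which a deployment point can be picked to match a power budget such as the 10 mW and 90 mW operating modes cited above.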