995 research outputs found

    Experimental evaluation of UWB indoor positioning for indoor track cycling

    Accurate radio frequency (RF)-based indoor localization systems are increasingly applied in sports. The most accurate RF-based localization systems use ultra-wideband (UWB) technology, which is why it is the most prevalent. UWB positioning systems allow for an in-depth analysis of athlete performance during training and competition. However, no research has investigated the feasibility of UWB technology for indoor track cycling. In this paper, we investigate the optimal position to mount the UWB hardware for that specific use case. Different positions on the bicycle and cyclist were evaluated based on accuracy, received power level, line-of-sight, maximum communication range, and comfort. In addition, the energy consumption of our UWB system was evaluated. We found that the optimal hardware position was the lower back, with a median ranging error of 22 cm (infrastructure hardware placed at 2.3 m). With the hardware mounted at the lower back, the maximum communication range varies between 32.6 m and 43.8 m. This shows that UWB localization systems are suitable for indoor positioning of track cyclists.
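    The median ranging error reported above comes from UWB range measurements between a mobile tag and fixed anchors. As a minimal illustration of how one such range is obtained, the sketch below implements single-sided two-way ranging (SS-TWR), a common UWB scheme; the timing values are hypothetical and not from the paper:

```python
# Minimal sketch of single-sided two-way ranging (SS-TWR): the tag sends a
# poll, the anchor replies after a known delay, and the time of flight is
# half of (round-trip time minus reply delay).

C = 299_702_547.0  # approximate speed of light in air, m/s

def ss_twr_distance(t_round: float, t_reply: float) -> float:
    """Estimate tag-anchor distance from round-trip and reply times (seconds)."""
    tof = (t_round - t_reply) / 2.0  # one-way time of flight
    return C * tof

# Illustrative timings: a 220 ns round trip with a 200 ns anchor reply delay
d = ss_twr_distance(220e-9, 200e-9)  # about 3.0 m
```

    In practice, clock drift between tag and anchor biases SS-TWR, which is one reason real systems often use double-sided variants; the principle of converting timestamps to a distance is the same.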

    Frustum PointNets for 3D Object Detection from RGB-D Data

    In this work, we study 3D object detection from RGB-D data in both indoor and outdoor scenes. While previous methods focus on images or 3D voxels, often obscuring natural 3D patterns and invariances of 3D data, we operate directly on raw point clouds by popping up RGB-D scans. A key challenge of this approach, however, is how to efficiently localize objects in point clouds of large-scale scenes (region proposal). Instead of relying solely on 3D proposals, our method leverages both mature 2D object detectors and advanced 3D deep learning for object localization, achieving efficiency as well as high recall even for small objects. Benefiting from learning directly in raw point clouds, our method is also able to precisely estimate 3D bounding boxes even under strong occlusion or with very sparse points. Evaluated on the KITTI and SUN RGB-D 3D detection benchmarks, our method outperforms the state of the art by remarkable margins while having real-time capability. Comment: 15 pages, 12 figures, 14 tables
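    "Popping up" an RGB-D scan means back-projecting each valid depth pixel into a 3D point using the camera intrinsics. A minimal sketch of that standard pinhole back-projection (the intrinsic values in the example are illustrative, not the paper's):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into an N x 3 point cloud."""
    v, u = np.nonzero(depth > 0)     # rows, cols of pixels with valid depth
    z = depth[v, u]
    x = (u - cx) * z / fx            # pinhole model: u = fx * x / z + cx
    y = (v - cy) * z / fy            #                v = fy * y / z + cy
    return np.stack([x, y, z], axis=1)

# Tiny synthetic example: a 2 x 2 depth map with one valid pixel at 2 m,
# sitting exactly at the principal point, so it maps to (0, 0, 2).
depth = np.zeros((2, 2))
depth[1, 1] = 2.0
pts = depth_to_points(depth, fx=100.0, fy=100.0, cx=1.0, cy=1.0)
```

    In the frustum approach, a 2D detector first selects an image region, and only the points that back-project from that region (a frustum in 3D) are passed to the point-cloud network.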

    Understanding the Expenditure and Recovery of Anaerobic Work Capacity Using Noninvasive Sensors

    The objective of this research is to advance the understanding of human performance to allow for optimized effort on specific tasks. This is accomplished by 1) understanding the expenditure and recovery of Anaerobic Work Capacity (AWC) as related to the Critical Power (CP) of a human, and 2) determining if and how a case for an energy-management system to optimize energy expenditure and recovery can be made in real time using noninvasive sensors. As humans exert energy, the body converts fuel into mechanical power through both aerobic and anaerobic energy systems. The mechanical power produced can be measured with a cycle ergometer, and the use of the energy systems can be measured by observing biological artifacts with sensors. There is a Critical Power level at which a human can theoretically operate indefinitely, and there is a well-established theory in the literature to predict the depletion of a human's finite Anaerobic Work Capacity based on this Critical Power. The literature, however, lacks a robust model for the recovery of Anaerobic Work Capacity. Because of this, a cycling study was conducted with ten regularly exercising subjects (9 male, 1 female, aged 23-44). First, the CP and AWC of each subject were determined by a 3-minute all-out intensity cycling test. The subjects then performed several interval protocols to exhaustion, with recovery intervals, to quantify how much AWC was recovered in each interval. Results: sub-Critical Power recovery is not proportional to above-Critical Power expenditure, and the amount of AWC recovered is influenced more by the power level held during recovery than by the amount of time spent in recovery.
The following conclusions are discussed in this thesis: 1) relationships between measurable biological artifacts and biological processes that are proven to exist in the literature; 2) the expenditure and recovery of Anaerobic Work Capacity; 3) methods to use real-time, noninvasive sensor data to determine the status of human work capacity; and 4) how the results can be used in a human-in-the-loop feedback control system to optimize performance for a given task.
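    The well-established depletion side of the Critical Power model referenced above can be sketched directly: for constant power P above CP, AWC drains at (P - CP) joules per second, so time to exhaustion is AWC / (P - CP). The recovery rule in the sketch below is a deliberately naive linear assumption, which the thesis finds does not match observed recovery:

```python
def awc_trace(power_trace, cp, awc, dt=1.0):
    """Track remaining anaerobic work capacity (joules) over a power trace.

    Above CP, AWC depletes at (P - CP) joules per second (the established
    depletion model). Below CP, this sketch naively recovers at (CP - P)
    joules per second, capped at the starting AWC; the study above finds
    real recovery is slower and depends on the recovery power level.
    """
    w, history = awc, []
    for p in power_trace:
        if p > cp:
            w -= (p - cp) * dt
        else:
            w = min(awc, w + (cp - p) * dt)
        history.append(w)
    return history

# Illustrative numbers: CP = 250 W, AWC = 20 kJ. Riding at 300 W drains
# 50 J/s, so exhaustion (AWC = 0) is predicted after 20000 / 50 = 400 s.
trace = awc_trace([300.0] * 400, cp=250.0, awc=20000.0)
```

    The interval protocols in the study probe exactly the branch this sketch gets wrong: how much of the drained capacity comes back during sub-CP recovery intervals of varying power and duration.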

    What Can Help Pedestrian Detection?

    Aggregating extra features has been considered an effective approach to boost traditional pedestrian detection methods. However, there is still a lack of studies on whether and how CNN-based pedestrian detectors can benefit from these extra features. The first contribution of this paper is to explore this issue by aggregating extra features into a CNN-based pedestrian detection framework. Through extensive experiments, we quantitatively evaluate the effects of different kinds of extra features. Moreover, we propose a novel network architecture, namely HyperLearner, to jointly learn pedestrian detection and the given extra feature. Through multi-task training, HyperLearner is able to utilize the information of the given features and improve detection performance without extra inputs at inference. Experimental results on multiple pedestrian benchmarks validate the effectiveness of the proposed HyperLearner. Comment: Accepted to IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 201
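    The reason multi-task training adds no inference cost can be shown with a toy model; this is not the paper's architecture, only a sketch of the pattern: a shared backbone feeds both a detection head and an auxiliary feature head, and the auxiliary head is simply dropped after training.

```python
import numpy as np

rng = np.random.default_rng(0)
W_shared = rng.normal(size=(8, 4))   # toy shared backbone weights
W_det = rng.normal(size=(4, 2))      # detection head (kept at inference)
W_feat = rng.normal(size=(4, 1))     # auxiliary feature head (training only)

def forward_train(x):
    h = np.tanh(x @ W_shared)
    return h @ W_det, h @ W_feat     # detection output + feature prediction

def forward_infer(x):
    h = np.tanh(x @ W_shared)        # same backbone, no extra inputs needed
    return h @ W_det

def joint_loss(det_out, det_tgt, feat_out, feat_tgt, lam=0.5):
    det_loss = np.mean((det_out - det_tgt) ** 2)
    feat_loss = np.mean((feat_out - feat_tgt) ** 2)
    return det_loss + lam * feat_loss  # weighted multi-task objective
```

    The auxiliary loss only shapes the shared weights during training; at test time the detector runs on the image alone, which is the property the abstract highlights.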

    Object Detection and Classification in Occupancy Grid Maps using Deep Convolutional Networks

    Detailed environment perception is a crucial component of automated vehicles. However, to deal with the amount of perceived information, we also require segmentation strategies. Based on a grid map environment representation, well suited for sensor fusion, free-space estimation, and machine learning, we detect and classify objects using deep convolutional neural networks. As input to our networks we use a multi-layer grid map that efficiently encodes 3D range sensor information. The inference output is a list of rotated bounding boxes with associated semantic classes. We conduct extensive ablation studies, highlight important design considerations when using grid maps, and evaluate our models on the KITTI Bird's Eye View benchmark. Qualitative and quantitative benchmark results show that we achieve robust detection and state-of-the-art accuracy solely using top-view grid maps from range sensor data. Comment: 6 pages, 4 tables, 4 figures
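    A minimal sketch of encoding a point cloud into a multi-layer top-view grid map follows; the ranges, cell size, and the two layers chosen here (point count and maximum height per cell) are illustrative choices, not necessarily the paper's:

```python
import numpy as np

def points_to_grid(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), cell=0.5):
    """Encode an N x 3 point cloud as a 2-layer top-view grid map.

    Layer 0 holds the point count per cell; layer 1 holds the maximum
    point height per cell. Points outside the ranges are discarded.
    """
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((2, nx, ny))
    for x, y, z in points:
        if x_range[0] <= x < x_range[1] and y_range[0] <= y < y_range[1]:
            i = int((x - x_range[0]) / cell)
            j = int((y - y_range[0]) / cell)
            grid[0, i, j] += 1                     # point count layer
            grid[1, i, j] = max(grid[1, i, j], z)  # max height layer
    return grid

# Two nearby points fall into the same 0.5 m cell
grid = points_to_grid([(10.1, 0.1, 0.5), (10.2, 0.2, 1.5)])
```

    Each layer becomes an input channel to the convolutional network, which is what lets an image-style detector operate on range sensor data from a bird's-eye view.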