Dataset for "Parameter Reduction and Optimisation for Point Cloud and Occupancy Mapping Algorithms"
In occupancy-mapping applications, the scene to be explored is typically large and contains irregularly shaped objects, which makes it difficult to evaluate a mapping approach because ground truth is hard to obtain. This dataset provides small measured scenes with ground truths for evaluation purposes. Videos are recorded as .svo files, which can be opened with the tools in the ZED SDK, available from the Stereolabs website (https://www.stereolabs.com). Images can be extracted from the videos using the ZED SDK API, which also gives access to the camera parameters. Ground-truth files are in .ot format and can be viewed with the octovis package; to install it on Ubuntu, run 'sudo apt-get install octovis'. Camera trajectories and keyframe poses produced by ORB-SLAM are stored in .txt files, which most text editors can open.
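ORB-SLAM conventionally saves camera trajectories as plain-text lines of the form "timestamp tx ty tz qx qy qz qw" (the TUM trajectory format). As a minimal sketch, assuming that column layout, the trajectory .txt files could be loaded like this; `parse_trajectory` and `Pose` are hypothetical helper names, and the exact column order should be checked against the files in the dataset.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Pose:
    timestamp: float
    translation: Tuple[float, float, float]   # (tx, ty, tz), metres
    quaternion: Tuple[float, float, float, float]  # (qx, qy, qz, qw)


def parse_trajectory(lines: List[str]) -> List[Pose]:
    """Parse trajectory lines in the assumed TUM-style format:
    'timestamp tx ty tz qx qy qz qw', skipping blanks and comments."""
    poses = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        vals = [float(v) for v in line.split()]
        poses.append(Pose(vals[0], tuple(vals[1:4]), tuple(vals[4:8])))
    return poses


# Usage with a synthetic example line (not taken from the dataset):
poses = parse_trajectory(["1403636579.76 0.01 -0.02 0.03 0.0 0.0 0.0 1.0"])
```

Reading the whole file is then just `parse_trajectory(open(path).readlines())`.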
Dataset for "Understanding Freehand Cursorless Pointing Variability and Its Impact on Selection Performance"
This dataset supports the journal article "Understanding Freehand Cursorless Pointing Variability and Its Impact on Selection Performance" (TOCHI, 2025). It contains motion capture data of the body and hands, recorded during a range of pointing gestures from 23 participants. The user study systematically explored how target position (3 rows by 5 columns), task focus (pointing as a primary task vs a secondary task), and user effort (accurate vs casual pointing) affect pointing behaviour and performance.

The dataset includes:
- Motion capture data for each trial (grouped by participant), comprising body landmarks captured with a markerless motion capture system and finger landmarks tracked with infrared markers.
- Trial annotations: metadata for each trial, such as the target position, labels for when pointing occurs, and observed-behaviour labels.
- Encoded gesture statistics: for each trial from which a valid pointing gesture could be extracted, an encoding of the gesture derived from the medians of body-pose features (e.g. elbow flexion), fatigue measures (e.g. consumed endurance), and rays (e.g. vector and accuracy).
- Self-reported user data, including participant age, hand dominance, and fatigue measures (obtained after completing pointing within each condition).
- Code for visualising the trials (including a subset of the rays used in our subsequent analysis), code for generating the encoded gestures from the motion capture data and annotations, and the script used to perform the analysis of the encoded gestures.

This dataset is provided for two purposes:
1. Further investigation into pointing behaviour and the development of pointing interaction systems. For this, see the Pointing Dataset section of the README for the structure and contents, and the Trial Visualiser section of the README for the script that visualises the motion capture data.
2. Reproduction of the data used in the analysis of the accompanying paper. For this, see the Pointing Dataset section of the README for the structure and contents, along with the Gesture Encoder and Analysis Script sections of the README for the code used to perform our analysis.
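A body-pose feature such as elbow flexion can be computed from three 3D landmarks as the angle at the elbow between the upper-arm and forearm vectors, then summarised per trial by its median across frames. The sketch below is illustrative only, not the dataset's own encoder; `elbow_flexion_deg` is a hypothetical helper, and the landmark coordinates shown are made up.

```python
import math
from statistics import median


def elbow_flexion_deg(shoulder, elbow, wrist):
    """Angle at the elbow (degrees) between the upper-arm vector
    (elbow->shoulder) and the forearm vector (elbow->wrist),
    given 3D landmark positions as (x, y, z) tuples."""
    upper = [s - e for s, e in zip(shoulder, elbow)]
    fore = [w - e for w, e in zip(wrist, elbow)]
    dot = sum(a * b for a, b in zip(upper, fore))
    norm = math.dist(shoulder, elbow) * math.dist(wrist, elbow)
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))


# Per-trial summary over three synthetic frames (straight arm,
# right angle, and an intermediate pose): median ≈ 135 degrees.
frames = [
    ((0, 0, 0), (0.3, 0, 0), (0.6, 0.0, 0)),
    ((0, 0, 0), (0.3, 0, 0), (0.3, 0.3, 0)),
    ((0, 0, 0), (0.3, 0, 0), (0.6, 0.3, 0)),
]
median_flexion = median(elbow_flexion_deg(*f) for f in frames)
```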
Dataset for 'How smart do smart meters need to be?'
This dataset consists of three subsets covering several variables related to energy consumption in 43 UK households: 1) internal temperature, CO2 level, and gas and electricity consumption before, during, and after a digital energy-feedback phase; 2) energy literacy before and after the digital feedback; and 3) user-experience data after the digital energy-feedback interventions.
Repository-Residuum model shape and volume assessment
The dataset includes the data presented and discussed in the paper 'Validity and reliability of a novel 3D scanner for assessment of the shape and volume of amputees' residual limb models', published in PLOS ONE. Objective assessment methods for monitoring residual limb volume following lower-limb amputation are required to enhance practitioner-led prosthetic fitting. The data collected here comprise the results and the statistical analysis performed to assess the validity and reliability of the Artec Eva 3D scanner (practical measurement) against a high-precision laser 3D scanner (criterion measurement) for determining the shape and volume of residual limb models. The data include volumes, cross-sectional areas, perimeters, body centre-of-mass positions, and sizes of ten different residual limb models (both transtibial and transfemoral).
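A limb-model volume can be related to its cross-sectional areas by integrating the areas along the limb axis. As a minimal sketch, assuming areas sampled at uniform spacing, the trapezoidal rule gives an estimate; `volume_from_sections` is a hypothetical helper, not the method used in the paper.

```python
def volume_from_sections(areas, spacing):
    """Estimate volume by trapezoidal integration of cross-sectional
    areas sampled at uniform spacing along the limb axis.
    Units follow the inputs, e.g. cm^2 areas and cm spacing -> cm^3."""
    if len(areas) < 2:
        raise ValueError("need at least two cross-sections")
    # Trapezoidal rule: interior sections count fully, the two end
    # sections count half.
    return spacing * (sum(areas) - 0.5 * (areas[0] + areas[-1]))


# Sanity check: a cylinder of constant area 2.0 sampled at 5 stations
# 0.1 apart spans a length of 0.4, so the volume is 2.0 * 0.4 = 0.8.
vol = volume_from_sections([2.0] * 5, 0.1)
```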
Dataset for BathRC Ventilation model
This dataset contains: (a) experimentally measured pressure-flow data from tests of ventilator circuits with a variety of configurations and circuit components; these data are useful for characterising circuit components and validating ventilation-circuit models, in particular the BathRC model; and (b) a spreadsheet implementing the BathRC model to compute the inspiration-restrictor requirements for ventilating two patients from a single ventilator.
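The role of an inspiration restrictor can be illustrated with a toy steady-flow resistance calculation: two patient branches driven in parallel by the same pressure each receive flow Q_i = ΔP / R_i, so adding series resistance to one branch reduces its share. This sketch is not the BathRC model itself (which the dataset's spreadsheet implements); the function names are hypothetical, and units are assumed to be cmH2O for pressure, L/s for flow, and cmH2O/(L/s) for resistance.

```python
def flow_split(delta_p, resistances):
    """Steady-state flow into each parallel branch driven by the same
    pressure difference: Q_i = delta_p / R_i."""
    return [delta_p / r for r in resistances]


def restrictor_resistance(delta_p, q_target, r_branch):
    """Additional series resistance needed so a branch with intrinsic
    resistance r_branch receives flow q_target at pressure delta_p."""
    r_total = delta_p / q_target
    # A restrictor can only add resistance, never remove it.
    return max(0.0, r_total - r_branch)


# Usage: at 20 cmH2O, branches of 40 and 50 cmH2O/(L/s) receive
# 0.5 and 0.4 L/s respectively.
flows = flow_split(20.0, [40.0, 50.0])
# To limit a 10 cmH2O/(L/s) branch to 0.5 L/s, a 30 cmH2O/(L/s)
# restrictor is needed (total resistance 20 / 0.5 = 40).
extra = restrictor_resistance(20.0, 0.5, 10.0)
```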
