24 research outputs found

    RELLISUR: A Real Low-Light Image Super-Resolution Dataset

    No full text
    The RELLISUR dataset contains real low-light, low-resolution images paired with normal-light, high-resolution reference counterparts. It aims to fill the gap between low-light image enhancement and super-resolution (SR), which are currently addressed only separately in the literature, even though the visibility of real-world images is often limited by both low light and low resolution. The dataset contains 12,750 paired images of different resolutions and degrees of low-light illumination, to facilitate training of deep-learning models that map directly from degraded, low-visibility images to high-quality, detail-rich, high-resolution images.
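    Below is a minimal sketch of how such paired data might be loaded for training, assuming a directory layout with matching file names for the low-light low-resolution inputs and the normal-light high-resolution references; the paths, file naming, and PNG format are assumptions for illustration, not part of the dataset description.

    # Minimal sketch of a paired-image loader for a RELLISUR-style dataset.
    # Directory layout and file naming are assumptions; consult the dataset
    # documentation for the actual structure.
    from pathlib import Path

    from PIL import Image
    from torch.utils.data import Dataset
    from torchvision import transforms


    class PairedLowLightSRDataset(Dataset):
        """Pairs a low-light low-resolution input with its normal-light
        high-resolution reference, matched by sorted file name."""

        def __init__(self, lr_dir: str, hr_dir: str):
            self.lr_paths = sorted(Path(lr_dir).glob("*.png"))   # degraded inputs (assumed PNG)
            self.hr_paths = sorted(Path(hr_dir).glob("*.png"))   # reference targets
            assert len(self.lr_paths) == len(self.hr_paths), "unpaired directories"
            self.to_tensor = transforms.ToTensor()

        def __len__(self):
            return len(self.lr_paths)

        def __getitem__(self, idx):
            lr = self.to_tensor(Image.open(self.lr_paths[idx]).convert("RGB"))
            hr = self.to_tensor(Image.open(self.hr_paths[idx]).convert("RGB"))
            return lr, hr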

    Spatially Variant Super-Resolution (SVSR) benchmarking dataset

    No full text
    The Spatially Variant Super-Resolution (SVSR) benchmarking dataset contains 1,119 low-resolution images degraded by complex noise of varying intensity and type, together with their noise-free ×2 and ×4 high-resolution counterparts, for evaluating the robustness of real-world super-resolution methods. The dataset is also suitable for evaluating denoisers.
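    As a small illustration of how such a benchmark might be used, the sketch below computes PSNR between a super-resolved output and the noise-free high-resolution reference; image loading is omitted, and both inputs are assumed to be uint8 arrays of identical shape.

    # Sketch of a PSNR comparison against noise-free high-resolution references,
    # as one might use when benchmarking a super-resolution method on SVSR-style data.
    import numpy as np


    def psnr(prediction: np.ndarray, reference: np.ndarray, max_val: float = 255.0) -> float:
        """Peak signal-to-noise ratio in dB between two images of identical shape."""
        mse = np.mean((prediction.astype(np.float64) - reference.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")
        return 10.0 * np.log10((max_val ** 2) / mse)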

    Benchmark movement data set for trust assessment in human robot collaboration

    No full text
    In the Drapebot project, a worker collaborates with a large industrial manipulator on two tasks: collaborative transport of carbon-fibre patches and collaborative draping. To enable data-driven trust assessment, the worker is equipped with a motion-tracking suit, and the body-movement data are labeled with trust scores from a standard trust questionnaire.

    Benchmark EEG data set for trust assessment for interactions with social robots

    No full text
    The data were collected during a game interaction with a small humanoid EZ-robot. The robot explains a word to the participant either through movements depicting the concept or by verbal description. Depending on their performance, participants could "earn" or lose candy as remuneration for their participation. The dataset comprises EEG (electroencephalography) recordings from 21 participants, gathered using Emotiv headsets. Each participant's EEG data include timestamps and measurements from 14 sensors placed across different regions of the scalp. The sensor labels in the header are: EEG.AF3, EEG.F7, EEG.F3, EEG.FC5, EEG.T7, EEG.P7, EEG.O1, EEG.O2, EEG.P8, EEG.T8, EEG.FC6, EEG.F4, EEG.F8, EEG.AF4, and Time.
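    A minimal sketch of reading one participant's recording with pandas, assuming the data are exported as CSV files with the header fields listed above; the file name used here is hypothetical.

    # Load one participant's EEG recording and separate the 14 channels from the timestamps.
    import pandas as pd

    SENSOR_COLUMNS = [
        "EEG.AF3", "EEG.F7", "EEG.F3", "EEG.FC5", "EEG.T7", "EEG.P7", "EEG.O1",
        "EEG.O2", "EEG.P8", "EEG.T8", "EEG.FC6", "EEG.F4", "EEG.F8", "EEG.AF4",
    ]

    recording = pd.read_csv("participant_01.csv")   # hypothetical file name
    signals = recording[SENSOR_COLUMNS]              # measurements from the 14 scalp electrodes
    timestamps = recording["Time"]                   # per-sample timestamps
    print(signals.describe())                        # quick sanity check of the channels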

    A framework for interactive human–robot design exploration

    No full text
    This study seeks to identify key aspects for increased integration of interactive robotics within the creative design process. As foundational research, it aims to contribute to the advancement of new explorative design methods that support architects in exploring the fabrication and assembly of an integrated, performance-driven architecture. The article describes and investigates a proposed design framework for supporting an interactive human–robot design process. The framework is examined through a three-week architectural studio in which university master's students explored the design of a brick construction with the support of an interactive robotic platform. The framework was evaluated by triangulating the authors' qualitative user observations, quantitative logging of the students' individual design processes, and questionnaires completed after the studio. The results suggest that interactive human–robot fabrication is a relevant mode of design with a positive effect on the process of creative design exploration.

    AAU RainSnow Traffic Surveillance Dataset

    No full text
    Rain, Snow, and Bad Weather in Traffic Surveillance: Computer vision-based image analysis lays the foundation for automatic traffic surveillance. This works well in daylight, when road users are clearly visible to the camera, but often struggles when the visibility of the scene is impaired by insufficient lighting or bad weather such as rain, snow, haze, and fog. For this dataset, we focused on collecting traffic surveillance video in rainfall and snowfall, capturing 22 five-minute videos from seven different traffic intersections. The illumination of the scenes varies from broad daylight to twilight and night. The scenes feature glare from car headlights, reflections from puddles, and blur from raindrops on the camera lens. The data were collected with a conventional RGB colour camera and a thermal infrared camera; combined, these modalities should enable robust detection and classification of road users even under challenging weather conditions. 100 frames were selected randomly from each five-minute sequence, and every road user in these frames is annotated at a per-pixel, instance level with a corresponding category label. In total, 2,200 frames are annotated, containing 13,297 objects.

    AAU VAP Trimodal People Segmentation Dataset

    No full text
    Context: How do you design a computer vision algorithm that can detect and segment people when they are captured by a visible-light camera, a thermal infrared camera, and a depth sensor? And how do you fuse the three inherently different data streams so that features can reliably be transferred from one modality to another? Feel free to download our dataset and try it out yourselves! Content: The dataset features a total of 5,724 annotated frames divided into three indoor scenes. Activity in scenes 1 and 3 uses the full depth range of the Kinect for XBOX 360 sensor, whereas activity in scene 2 is constrained to a depth range of plus/minus 0.250 m in order to suppress the parallax between the two physical sensors. Scenes 1 and 2 are situated in a closed meeting room with little natural light to disturb the depth sensing, whereas scene 3 is situated in an area with wide windows and a substantial amount of sunlight. In each scene, a total of three persons are interacting, reading, walking, sitting, etc. Every person is annotated with a unique ID in the scene at pixel level in the RGB modality. For the thermal and depth modalities, annotations are transferred from the RGB images using a registration algorithm found in registrator.cpp. We used our AAU VAP Multimodal Pixel Annotator to create the ground-truth, pixel-based masks for all three modalities.
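    A minimal sketch of reading one synchronized trimodal frame together with its pixel-level ID mask; the directory layout, file names, and mask encoding (pixel value = person ID) are assumptions for illustration, and the RGB-to-thermal/depth registration itself is handled by the dataset's registrator.cpp.

    # Load one synchronized frame from each modality plus its per-pixel person-ID mask.
    import numpy as np
    from PIL import Image

    rgb     = np.array(Image.open("scene1/rgb/000001.png").convert("RGB"))   # hypothetical paths
    thermal = np.array(Image.open("scene1/thermal/000001.png"))
    depth   = np.array(Image.open("scene1/depth/000001.png"))
    mask    = np.array(Image.open("scene1/masks/000001.png"))                # assumed: 0 = background, 1..N = person IDs

    person_ids = [int(v) for v in np.unique(mask) if v != 0]
    print(f"persons in frame: {person_ids}")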

    Sewer-ML

    No full text
    Sewer-ML is a sewer defect dataset. It contains 1.3 million images from 75,618 videos collected from three Danish water utility companies over nine years. All videos have been annotated by licensed sewer inspectors following the Danish sewer inspection standard, Fotomanualen, which leads to consistent and reliable annotations across a total of 17 annotated defect classes.
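    A minimal sketch of turning per-image defect annotations into multi-label targets, assuming the annotations ship as a CSV with a Filename column plus one 0/1 column per defect class; the file and column names here are hypothetical, not the dataset's documented format.

    # Build a (num_images, num_classes) multi-label matrix from a hypothetical annotation CSV.
    import numpy as np
    import pandas as pd

    annotations = pd.read_csv("sewerml_train.csv")                       # hypothetical annotation file
    defect_columns = [c for c in annotations.columns if c != "Filename"] # one column per defect class

    labels = annotations[defect_columns].to_numpy(dtype=np.float32)
    print(f"{labels.shape[1]} defect classes, "
          f"{int(labels.sum())} positive labels across {labels.shape[0]} images")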