
    Justify your alpha

    Benjamin et al. proposed changing the conventional “statistical significance” threshold (i.e., the alpha level) from p ≤ .05 to p ≤ .005 for all novel claims with relatively low prior odds. They provided two arguments for why lowering the significance threshold would “immediately improve the reproducibility of scientific research.” First, a p-value near .05 provides only weak evidence for the alternative hypothesis. Second, under certain assumptions, an alpha of .05 leads to high false positive report probabilities (FPRP; the probability that a significant finding is a false positive).
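    As a rough illustration of the second argument, the FPRP can be computed directly from the alpha level, the statistical power, and the prior probability that the tested hypothesis is true. A minimal Python sketch follows; the power and prior-odds values are illustrative assumptions, not figures from the paper.

        # False positive report probability (FPRP): P(H0 is true | result is significant).
        #   FPRP = alpha * (1 - prior) / (alpha * (1 - prior) + power * prior)
        # The power and prior values below are illustrative assumptions only.
        def fprp(alpha: float, power: float, prior: float) -> float:
            false_positives = alpha * (1.0 - prior)   # truly null hypotheses that reach significance
            true_positives = power * prior            # truly non-null hypotheses that reach significance
            return false_positives / (false_positives + true_positives)

        # A novel claim with low prior odds (1:10, i.e. prior = 1/11) tested at 80% power:
        for alpha in (0.05, 0.005):
            print(f"alpha = {alpha:<6} FPRP = {fprp(alpha, power=0.80, prior=1 / 11):.2f}")
        # Lowering alpha from .05 to .005 sharply reduces the FPRP under these assumptions.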

    Benchmark movement data set for trust assessment in human robot collaboration

    In the Drapebot project, a worker is supposed to collaborate with a large industrial manipulator in two tasks: collaborative transport of carbon fibre patches and collaborative draping. To realize data-driven trust assessment, the worker is equipped with a motion tracking suit, and the body movement data is labeled with trust scores from a standard trust questionnaire.
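    A data-driven trust model of this kind essentially pairs windows of body-movement features with questionnaire trust scores as supervised targets. The Python sketch below shows one way such pairing could be set up; the file names, column names, and windowing scheme are assumptions for illustration, not the project's actual pipeline.

        # Sketch: label windowed motion-tracking features with questionnaire trust scores.
        # "mocap.csv" and "trust_scores.csv" are hypothetical file names.
        import numpy as np
        import pandas as pd

        WINDOW = 200  # samples per window (assumed segment length)

        mocap = pd.read_csv("mocap.csv")          # per-frame joint data plus a "trial" column
        trust = pd.read_csv("trust_scores.csv")   # one questionnaire trust score per trial

        X, y = [], []
        for trial_id, frames in mocap.groupby("trial"):
            score = trust.loc[trust["trial"] == trial_id, "trust_score"].item()
            values = frames.drop(columns="trial").to_numpy()
            # Split the trial into fixed-length windows; each window inherits the trial's trust score.
            for start in range(0, len(values) - WINDOW + 1, WINDOW):
                X.append(values[start:start + WINDOW].mean(axis=0))  # simple per-window summary feature
                y.append(score)

        X, y = np.asarray(X), np.asarray(y)
        print(X.shape, y.shape)  # (n_windows, n_features), (n_windows,)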

    AAU RainSnow Traffic Surveillance Dataset

    Rain, Snow, and Bad Weather in Traffic Surveillance: Computer vision-based image analysis lays the foundation for automatic traffic surveillance. This works well in daylight when the road users are clearly visible to the camera, but often struggles when the visibility of the scene is impaired by insufficient lighting or bad weather conditions such as rain, snow, haze, and fog. In this dataset, we have focused on collecting traffic surveillance video in rainfall and snowfall, capturing 22 five-minute videos from seven different traffic intersections. The illumination of the scenes varies from broad daylight to twilight and night. The scenes feature glare from the headlights of cars, reflections from puddles, and blur from raindrops on the camera lens. We have collected the data using a conventional RGB colour camera and a thermal infrared camera. If combined, these modalities should enable robust detection and classification of road users even under challenging weather conditions. 100 frames have been selected randomly from each five-minute sequence, and every road user in these frames is annotated on a per-pixel, instance level with a corresponding category label. In total, 2,200 frames are annotated, containing 13,297 objects.
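    The annotation count follows directly from the capture protocol (22 sequences x 100 frames = 2,200 frames), and each annotated frame has an RGB image, a thermal image, and a per-pixel instance mask. The Python sketch below shows one way paired frames could be iterated; the directory layout and file naming are assumptions, not the dataset's published structure.

        # Sketch: iterate over paired RGB/thermal frames and per-pixel instance masks.
        # The directory layout below is a hypothetical example.
        from pathlib import Path
        import cv2  # OpenCV

        ROOT = Path("rainsnow")                    # hypothetical dataset root
        n_sequences, frames_per_sequence = 22, 100
        assert n_sequences * frames_per_sequence == 2200  # matches the stated total

        for seq_dir in sorted(ROOT.iterdir()):
            for rgb_path in sorted((seq_dir / "rgb").glob("*.png")):
                thermal_path = seq_dir / "thermal" / rgb_path.name  # assumed naming convention
                mask_path = seq_dir / "masks" / rgb_path.name       # per-pixel instance labels
                rgb = cv2.imread(str(rgb_path), cv2.IMREAD_COLOR)
                thermal = cv2.imread(str(thermal_path), cv2.IMREAD_GRAYSCALE)
                mask = cv2.imread(str(mask_path), cv2.IMREAD_UNCHANGED)
                # The RGB and thermal images can be fused (e.g. stacked as extra channels)
                # to make detection robust to rain, snow, glare, and low light.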

    Multi-view Traffic Intersection Dataset (MTID)

    The Multi-view Traffic Intersection Dataset (MTID) is a traffic surveillance dataset containing footage of the same intersection recorded from two different viewpoints during the same time period. Traffic in all views has been carefully annotated to pixel-level accuracy.

    AAU VAP Trimodal People Segmentation Dataset

    Context: How do you design a computer vision algorithm that is able to detect and segment people when they are captured by a visible-light camera, a thermal infrared camera, and a depth sensor? And how do you fuse the three inherently different data streams such that you can reliably transfer features from one modality to another? Feel free to download our dataset and try it out yourselves! Content: The dataset features a total of 5,724 annotated frames divided into three indoor scenes. Activity in scenes 1 and 3 uses the full depth range of the Kinect for Xbox 360 sensor, whereas activity in scene 2 is constrained to a depth range of plus/minus 0.250 m in order to suppress the parallax between the two physical sensors. Scenes 1 and 2 are situated in a closed meeting room with little natural light to disturb the depth sensing, whereas scene 3 is situated in an area with wide windows and a substantial amount of sunlight. For each scene, a total of three persons are interacting, reading, walking, sitting, etc. Every person is annotated with a unique ID in the scene on a pixel level in the RGB modality. For the thermal and depth modalities, annotations are transferred from the RGB images using a registration algorithm found in registrator.cpp. We have used our AAU VAP Multimodal Pixel Annotator to create the ground-truth, pixel-based masks for all three modalities.
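    The annotation transfer between modalities can be illustrated with a simple planar registration: a homography estimated from corresponding points in the RGB and thermal views warps the RGB ground-truth mask into the thermal image plane. The sketch below uses made-up point correspondences and image sizes purely for illustration; the dataset's own registration is the one implemented in registrator.cpp.

        # Sketch: transfer a pixel-level person mask from the RGB modality to the thermal modality
        # via a homography. All coordinates and image sizes are illustrative placeholders.
        import cv2
        import numpy as np

        # Corresponding points picked in the RGB and thermal images (hypothetical values).
        rgb_pts = np.float32([[100, 120], [520, 110], [530, 400], [90, 410]])
        thermal_pts = np.float32([[60, 80], [300, 75], [305, 230], [55, 235]])
        H, _ = cv2.findHomography(rgb_pts, thermal_pts)

        rgb_mask = np.zeros((480, 640), dtype=np.uint8)  # stand-in for a per-person ID mask
        rgb_mask[150:350, 200:320] = 1                   # hypothetical person region

        thermal_mask = cv2.warpPerspective(
            rgb_mask, H, (384, 288),                     # assumed thermal resolution
            flags=cv2.INTER_NEAREST,                     # keep integer person IDs intact
        )
        print(thermal_mask.shape, thermal_mask.max())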

    Sewer-ML

    Sewer-ML is a sewer defect dataset. It contains 1.3 million images from 75,618 videos collected from three Danish water utility companies over nine years. All videos have been annotated by licensed sewer inspectors following the Danish sewer inspection standard, Fotomanualen. This leads to consistent and reliable annotations and a total of 17 annotated defect classes.
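    Because a single inspection image can exhibit several of the 17 defect classes at once, the annotations map naturally onto multi-label target vectors. The Python sketch below assumes a simple table with one comma-separated defect-code field per image; the column names and example codes are illustrative, not Sewer-ML's actual annotation format.

        # Sketch: encode per-image sewer defect annotations as multi-label target vectors.
        # Column names and defect codes below are illustrative assumptions.
        import pandas as pd
        from sklearn.preprocessing import MultiLabelBinarizer

        annotations = pd.DataFrame({
            "image": ["0001.png", "0002.png", "0003.png"],
            "defects": ["RB,FS", "", "OB"],   # comma-separated defect codes; empty string = no defect
        })

        labels = [row.split(",") if row else [] for row in annotations["defects"]]
        mlb = MultiLabelBinarizer()           # one binary column per defect class (17 in the full dataset)
        Y = mlb.fit_transform(labels)
        print(mlb.classes_)                   # defect classes observed in this toy example
        print(Y)                              # binary target matrix for multi-label classification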

    RELLISUR: A Real Low-Light Image Super-Resolution Dataset

    The RELLISUR dataset contains real low-light, low-resolution images paired with normal-light, high-resolution reference counterparts. The dataset aims to fill the gap between low-light image enhancement and low-resolution image enhancement (super-resolution, SR), which are currently only addressed separately in the literature, even though the visibility of real-world images is often limited by both low light and low resolution. The dataset contains 12,750 paired images at different resolutions and degrees of low-light illumination, to facilitate the learning of deep-learning-based models that can perform a direct mapping from degraded images with low visibility to high-quality, detail-rich images of high resolution.
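    A direct mapping of this kind is typically learned from aligned image pairs: a dark, low-resolution input and its bright, high-resolution reference. The Python/PyTorch sketch below shows one way such pairs could be loaded; the directory names, file-name matching, and x4 scale factor are assumptions for illustration rather than the dataset's actual layout.

        # Sketch: a paired dataset for joint low-light enhancement and super-resolution.
        # Directory names and the scale factor are illustrative assumptions.
        from pathlib import Path
        from PIL import Image
        from torch.utils.data import Dataset
        from torchvision.transforms.functional import to_tensor

        class PairedLowLightSRDataset(Dataset):
            def __init__(self, root: str, scale: int = 4):
                self.lr_paths = sorted(Path(root, "low_light_lr").glob("*.png"))
                self.hr_dir = Path(root, "normal_light_hr")
                self.scale = scale  # assumed upscaling factor between the pairs

            def __len__(self) -> int:
                return len(self.lr_paths)

            def __getitem__(self, idx: int):
                lr_path = self.lr_paths[idx]
                hr_path = self.hr_dir / lr_path.name                 # assumed: matching file names
                lr = to_tensor(Image.open(lr_path).convert("RGB"))   # dark, low-resolution input
                hr = to_tensor(Image.open(hr_path).convert("RGB"))   # bright, high-resolution target
                return lr, hr  # a model learns the direct mapping lr -> hr (enhance + upscale)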