UG^2: a Video Benchmark for Assessing the Impact of Image Restoration and Enhancement on Automatic Visual Recognition
Advances in image restoration and enhancement techniques have led to
discussion about how such algorithms can be applied as a pre-processing step to
improve automatic visual recognition. In principle, techniques like deblurring
and super-resolution should yield improvements by de-emphasizing noise and
increasing signal in an input image. But the historically divergent goals of
the computational photography and visual recognition communities have created a
significant need for more work in this direction. To facilitate new research,
we introduce a new benchmark dataset called UG^2, which contains three
difficult real-world scenarios: uncontrolled videos taken by UAVs and manned
gliders, as well as controlled videos taken on the ground. Over 160,000
annotated frames for hundreds of ImageNet classes are available, which are used
for baseline experiments that assess the impact of known and unknown image
artifacts and other conditions on common deep learning-based object
classification approaches. Further, current image restoration and enhancement
techniques are evaluated by determining whether or not they improve baseline
classification performance. Results show that there is plenty of room for
algorithmic innovation, making this dataset a useful tool going forward.
Comment: Supplemental material: https://goo.gl/vVM1xe, Dataset:
https://goo.gl/AjA6En, CVPR 2018 Prize Challenge: ug2challenge.or
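The evaluation protocol described in this abstract — checking whether a restoration or enhancement step helps or hurts a downstream classifier — can be sketched as below. The classifier, the "enhancement" function, and the toy data are illustrative stand-ins, not the paper's actual models or dataset:

```python
# Hedged sketch: compare classification accuracy with and without an
# enhancement preprocessing step, in the spirit of the UG^2 baseline
# experiments. The classifier and enhancement below are toy stand-ins.

def accuracy(classifier, frames, labels):
    """Fraction of frames the classifier labels correctly."""
    correct = sum(1 for f, y in zip(frames, labels) if classifier(f) == y)
    return correct / len(frames)

def evaluate_enhancement(classifier, enhance, frames, labels):
    """Return (baseline accuracy, accuracy after enhancement)."""
    base = accuracy(classifier, frames, labels)
    enhanced = accuracy(classifier, [enhance(f) for f in frames], labels)
    return base, enhanced

# Toy example: "frames" are scalars, the classifier thresholds them, and
# "enhancement" subtracts a known additive bias (a crude denoiser).
noisy_frames = [0.95, 1.6, 0.92, 1.7]
labels = [0, 1, 0, 1]
classifier = lambda f: int(f > 0.9)
enhance = lambda f: f - 0.3

base, enhanced = evaluate_enhancement(classifier, enhance, noisy_frames, labels)
```

The key design point the benchmark makes measurable is exactly this delta: enhancement is only useful as pre-processing if the second accuracy exceeds the first.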
RGBD Datasets: Past, Present and Future
Since the launch of the Microsoft Kinect, scores of RGBD datasets have been
released. These have propelled advances in areas from reconstruction to gesture
recognition. In this paper we explore the field, reviewing datasets across
eight categories: semantics, object pose estimation, camera tracking, scene
reconstruction, object tracking, human actions, faces and identification. By
extracting relevant information in each category we help researchers to find
appropriate data for their needs, and we consider which datasets have succeeded
in driving computer vision forward and why.
Finally, we examine the future of RGBD datasets. We identify key areas which
are currently underexplored, and suggest that future directions may include
synthetic data and dense reconstructions of static and dynamic scenes.
Comment: 8 pages excluding references (CVPR style)
RGB-D datasets using Microsoft Kinect or similar sensors: a survey
RGB-D data has turned out to be a very useful representation of an indoor scene for solving fundamental computer vision problems. It combines the advantages of the color image, which provides appearance information about an object, and the depth image, which is immune to variations in color, illumination, rotation angle and scale. With the invention of the low-cost Microsoft Kinect sensor, which was initially used for gaming and later became a popular device for computer vision, high-quality RGB-D data can be acquired easily. In recent years, more and more RGB-D image/video datasets dedicated to various applications have become available, which are of great importance for benchmarking the state of the art. In this paper, we systematically survey popular RGB-D datasets for different applications including object recognition, scene classification, hand gesture recognition, 3D simultaneous localization and mapping, and pose estimation. We provide insights into the characteristics of each important dataset, and compare the popularity and difficulty of those datasets. Overall, the main goal of this survey is to give a comprehensive description of the available RGB-D datasets and thus to guide researchers in the selection of suitable datasets for evaluating their algorithms.
FindFoci: a focus detection algorithm with automated parameter training that closely matches human assignments, reduces human inconsistencies and increases speed of analysis
Accurate and reproducible quantification of the accumulation of proteins into foci in cells is essential for data interpretation and for biological inferences. To improve reproducibility, much emphasis has been placed on the preparation of samples, but less attention has been given to reporting and standardizing the quantification of foci. The current standard to quantitate foci in open-source software is to manually determine a range of parameters based on the outcome of one or a few representative images and then apply the parameter combination to the analysis of a larger dataset. Here, we demonstrate the power and utility of using machine learning to train a new algorithm (FindFoci) to determine optimal parameters. FindFoci closely matches human assignments and allows rapid automated exploration of parameter space. Thus, individuals can train the algorithm to mirror their own assignments and then automate focus counting using the same parameters across a large number of images. Using the training algorithm to match human assignments of foci, we demonstrate that applying an optimal parameter combination from a single image is not broadly applicable to analysis of other images scored by the same experimenter or by other experimenters. Our analysis thus reveals wide variation in human assignment of foci and their quantification. To overcome this, we developed training on multiple images, which reduces the inconsistency of using a single or a few images to set parameters for focus detection. FindFoci is provided as an open-source plugin for ImageJ.
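The parameter-training idea in this abstract — search detector parameter space and keep the combination whose counts best match human assignments across multiple images — can be sketched as follows. The simple threshold-plus-connected-components detector and the synthetic "images" are illustrative assumptions, not FindFoci's actual algorithm:

```python
# Hedged sketch of training a focus detector against human assignments:
# grid-search a parameter (here, an intensity threshold) and keep the
# value whose foci counts deviate least from human counts over a set of
# training images. This toy detector is a stand-in for FindFoci.

def count_foci(image, threshold):
    """Count connected components of pixels above `threshold` (4-connectivity)."""
    rows, cols = len(image), len(image[0])
    seen = set()
    count = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] > threshold and (r, c) not in seen:
                count += 1
                stack = [(r, c)]  # flood-fill this focus
                while stack:
                    y, x = stack.pop()
                    if (y, x) in seen or not (0 <= y < rows and 0 <= x < cols):
                        continue
                    if image[y][x] <= threshold:
                        continue
                    seen.add((y, x))
                    stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

def train_threshold(images, human_counts, candidates):
    """Pick the candidate threshold minimizing total count mismatch."""
    def error(t):
        return sum(abs(count_foci(img, t) - h)
                   for img, h in zip(images, human_counts))
    return min(candidates, key=error)

# Two toy 4x4 intensity images with human-assigned foci counts.
images = [
    [[0, 0, 5, 0],
     [9, 0, 5, 0],
     [9, 0, 0, 0],
     [0, 0, 0, 8]],   # human scores 2 foci (the dim 5s are background)
    [[7, 0, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 7, 7],
     [0, 0, 0, 0]],   # human scores 2 foci
]
human_counts = [2, 2]
best = train_threshold(images, human_counts, candidates=[1, 4, 6])
```

Training on several images jointly, as in the last step, is what guards against the single-image overfitting the abstract warns about: a threshold that reproduces one scorer's counts on one image need not transfer to the rest.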
GMES-service for assessing and monitoring subsidence hazards in coastal lowland areas around Europe. SubCoast D3.5.1
This document is version two of the user requirements for SubCoast work package 3.5, it is
SubCoast deliverable 3.5.1. Work package 3.5 aims to provide a European integrated GIS
product on subsidence and relative sea level rise. The first step of this process was to
contact the European Environment Agency as the main user to discover their user
requirements.
This document presents these requirements, the outline methodology that will be used to carry
out the integration, and the datasets that will be used. In outline, the main user requirements
of the EEA are:
1. Gridded approach using an Inspire compliant grid
2. The grid would hold data on:
a. Likely rate of subsidence
b. RSLR
c. Impact (Vulnerability)
d. Certainty (confidence map)
e. Contribution of ground motion to RSLR
f. A measure of certainty in the data provided
g. Metadata
3. Spatial Coverage - Ideally entire coastline of all 37 member states
a. Spatial resolution - 1km
4. Provide a measure of the degree of contribution of ground motion to RSLR
The European integration will be based around a GIS methodology. Datasets will be
integrated and interpreted to provide information on the data values above, the main value being
a likelihood of subsidence. This product will initially be developed at its lowest level of detail
for the London area. BGS have a wealth of data for London; this will enable this less detailed
product to be validated and will also enable the generation of a more detailed product using the
best data available. Once the methodology has been developed it will be rolled out to other
areas of the European coastline.
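The gridded product described by the user requirements above can be sketched as a record per 1 km grid cell. The field names, the example INSPIRE-style cell identifier, and the simple combination rule for the ground-motion contribution are illustrative assumptions, not the SubCoast specification:

```python
# Hedged sketch: one cell of a 1 km INSPIRE-style grid carrying the
# attributes listed in the EEA user requirements. Field names and the
# RSLR combination rule are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class GridCell:
    cell_id: str                  # INSPIRE-style grid cell identifier
    subsidence_rate_mm_yr: float  # likely rate of subsidence (requirement 2a)
    eustatic_slr_mm_yr: float     # sea level rise excluding ground motion
    vulnerability: float          # impact score, 0..1 (requirement 2c)
    confidence: float             # certainty in the data, 0..1 (requirement 2d/2f)

    @property
    def rslr_mm_yr(self):
        """Relative sea level rise: eustatic rise plus ground subsidence."""
        return self.eustatic_slr_mm_yr + self.subsidence_rate_mm_yr

    @property
    def ground_motion_share(self):
        """Contribution of ground motion to RSLR (requirement 4)."""
        return self.subsidence_rate_mm_yr / self.rslr_mm_yr

cell = GridCell("1kmE3687N3184", subsidence_rate_mm_yr=1.5,
                eustatic_slr_mm_yr=3.0, vulnerability=0.7, confidence=0.6)
```

Holding subsidence and eustatic rise as separate fields is what lets the grid report requirement 4 (the degree to which ground motion contributes to RSLR) as a derived quantity rather than a separately mapped layer.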
The initial input data that have been reviewed for their suitability for the European integration
are listed below. These are the datasets that have Europe-wide availability; it is expected
that more detailed datasets will be used in areas where they are available.
1. Terrafirma Data
2. One Geology
3. One Geology Europe
4. Population Density (Geoland2)
5. The Urban Atlas (Geoland2)
6. Elevation Data
a. SRTM
b. GDEM
c. GTOPO 30
d. NextMap Europe
7. MyOceans Sea Level Data
8. Storm Surge Locations
9. European Environment Agency
a. Elevation breakdown 1km
b. Corine Land Cover 2000 (CLC2000) coastline
c. Sediment Discharges
d. Shoreline
e. Maritime Boundaries
f. Hydrodynamics and Sea Level Rise
g. Geomorphology, Geology, Erosion Trends and Coastal Defence Works
h. Corine land cover 1990
i. Five metre elevation contour line
10. FutureCoas