Scalable Surface Reconstruction from Point Clouds with Extreme Scale and Density Diversity
In this paper we present a scalable approach for robustly computing a 3D
surface mesh from multi-scale multi-view stereo point clouds that can handle
extreme jumps of point density (in our experiments three orders of magnitude).
The backbone of our approach is a combination of octree data partitioning,
local Delaunay tetrahedralization and graph cut optimization. Graph cut
optimization is used twice, once to extract surface hypotheses from local
Delaunay tetrahedralizations and once to merge overlapping surface hypotheses
even when the local tetrahedralizations do not share the same topology. This
formulation allows us to obtain a constant memory consumption per sub-problem
while at the same time retaining the density independent interpolation
properties of the Delaunay-based optimization. On multiple public datasets, we
demonstrate that our approach is highly competitive with the state-of-the-art
in terms of accuracy, completeness and outlier resilience. Further, we
demonstrate the multi-scale potential of our approach by processing a newly
recorded dataset with 2 billion points and a point density variation of more
than four orders of magnitude, requiring less than 9 GB of RAM per process.
Comment: This paper was accepted to the IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), 2017. The copyright was transferred to IEEE
(ieee.org). The official version of the paper will be made available on IEEE
Xplore(R) (ieeexplore.ieee.org). This version of the paper also contains the
supplementary material, which will not appear on IEEE Xplore(R).
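The octree partitioning step that gives the abstract's constant per-sub-problem memory can be sketched as a recursive split that caps the number of points per leaf, with a small overlap margin so that neighbouring leaves share points (which is what later allows overlapping surface hypotheses to be merged). The leaf-size cap and margin values here are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def octree_partition(points, max_leaf=500, margin=0.05, bounds=None):
    """Recursively split a 3D point cloud into octree leaves.

    Each leaf holds at most `max_leaf` points, so a downstream step
    (e.g. a local Delaunay tetrahedralization) always sees a
    bounded-size sub-problem regardless of local point density.
    `margin` pads each cell so neighbouring leaves overlap.
    """
    if bounds is None:
        bounds = (points.min(axis=0), points.max(axis=0))
    lo, hi = bounds
    pad = margin * (hi - lo)
    mask = np.all((points >= lo - pad) & (points <= hi + pad), axis=1)
    cell = points[mask]
    if len(cell) == 0:
        return []
    if len(cell) <= max_leaf:
        return [cell]
    mid = (lo + hi) / 2.0
    leaves = []
    # eight child cells: lower/upper half along each of the three axes
    for cx in range(2):
        for cy in range(2):
            for cz in range(2):
                c = np.array([cx, cy, cz])
                child_lo = np.where(c == 0, lo, mid)
                child_hi = np.where(c == 0, mid, hi)
                leaves += octree_partition(cell, max_leaf, margin,
                                           (child_lo, child_hi))
    return leaves
```

Because of the overlap margin, points near cell boundaries appear in more than one leaf, which is the property the paper's second graph-cut stage exploits when merging surface hypotheses.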
Investigating Mechanisms of Hydraulic Conductivity Transience in Sandy Streambeds
Streambed hydraulic conductivity (K) is known to be spatially and temporally heterogeneous, but few attempts to understand the controls on temporal variability have been made. This study documents temporal K transience and demonstrates how hydraulic, geophysical, and sedimentological methods can be combined to understand the processes that give rise to changes in streambed K. Falling head permeameter tests and slug tests were conducted to determine vertical K (Kv) and K (slug test K), respectively. These tests were repeated three times over a twelve-week period on the same grid at a depth of 0.5 meters below the bed of the Loup River in east-central Nebraska during the summer of 2017. This grid included (1) a stationary braid bar where diagenetic pore clogging is expected to control K transience, and (2) mobile sediments of the adjacent stream channel where deposition and erosion are thought to be the dominant controls. Sediment samples were collected at the site of each hydraulic test to determine grain size distributions and estimate K. Ground penetrating radar (GPR) surveys at 450 MHz and frequency domain electromagnetic geophysical surveys provided high-resolution images of subsurface structure. Kv ranges between 0.1 and 45 meters/day, and K ranges between 15 and 55 meters/day. Kv and K changed significantly only between the second and third sampling events. K declined 14-20% in both environments while Kv declined 27% on the bar, but was unchanged in the channel. Despite evidence of scour and fill in the channel captured by GPR, deposition and erosion did not exert a dominant influence on K transience. The results of this study suggest that processes other than physical sediment transport, such as bioclogging or gas ebullition, were responsible for the decrease in K.
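The falling-head permeameter tests mentioned above reduce to the standard relation Kv = (a·L / (A·t)) · ln(h0/h1), where a is the standpipe cross-section, A the sample cross-section, L the sample length, and h0, h1 the heads at the start and end of the interval t. The numbers below are illustrative, not measurements from the study.

```python
import math

def falling_head_kv(sample_length_m, h0_m, h1_m, elapsed_s, area_ratio=1.0):
    """Vertical hydraulic conductivity from a falling-head test.

    Kv = (a/A) * (L / t) * ln(h0 / h1); `area_ratio` is a/A, which is
    1.0 when the standpipe and sample share one cross-section.
    Returns Kv in meters/day.
    """
    kv_m_per_s = area_ratio * (sample_length_m / elapsed_s) * math.log(h0_m / h1_m)
    return kv_m_per_s * 86400.0  # seconds per day

# hypothetical test: 0.5 m sample, head falls from 1.0 m to 0.5 m in one hour
kv = falling_head_kv(0.5, 1.0, 0.5, 3600.0)
```

For these hypothetical inputs Kv works out to roughly 8.3 m/day, comfortably inside the 0.1-45 m/day range the abstract reports.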
Advisor: Jesse T. Koru
Using Self-Contradiction to Learn Confidence Measures in Stereo Vision
Learned confidence measures gain increasing importance for outlier removal
and quality improvement in stereo vision. However, acquiring the necessary
training data is typically a tedious and time consuming task that involves
manual interaction, active sensing devices and/or synthetic scenes. To overcome
this problem, we propose a new, flexible, and scalable way for generating
training data that only requires a set of stereo images as input. The key idea
of our approach is to use different viewpoints for reasoning about
contradictions and consistencies between multiple depth maps generated with the
same stereo algorithm. This enables us to generate a huge amount of training
data in a fully automated manner. Among other experiments, we demonstrate the
potential of our approach by boosting the performance of three learned
confidence measures on the KITTI2012 dataset by simply training them on a vast
amount of automatically generated training data rather than a limited amount of
laser ground truth data.
Comment: This paper was accepted to the IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), 2016. The copyright was transferred to IEEE
(https://www.ieee.org). The official version of the paper will be made
available on IEEE Xplore(R) (http://ieeexplore.ieee.org). This version of
the paper also contains the supplementary material, which will not appear
on IEEE Xplore(R).
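The labeling idea, reasoning about agreement between depth maps produced by the same stereo algorithm from different viewpoints, can be sketched as a per-pixel consistency check. The relative threshold and the assumption that both depth maps are already registered into a common view (in the paper this requires warping between viewpoints) are simplifications for illustration.

```python
import numpy as np

def consistency_labels(depth_a, depth_b, rel_tau=0.05):
    """Label each pixel as confident (1), outlier (0), or unknown (-1).

    depth_a and depth_b are depth maps of the same view, assumed
    already registered.  Pixels where both maps agree within a
    relative threshold become positive training samples; pixels where
    both are valid but contradict each other become negatives; pixels
    with missing depth (<= 0) stay unlabeled.
    """
    valid = (depth_a > 0) & (depth_b > 0)
    rel_diff = np.abs(depth_a - depth_b) / np.maximum(depth_a, 1e-9)
    labels = np.full(depth_a.shape, -1, dtype=np.int8)
    labels[valid & (rel_diff <= rel_tau)] = 1
    labels[valid & (rel_diff > rel_tau)] = 0
    return labels
```

Run over many image pairs, a check like this yields the large volume of automatically generated labels the abstract describes, without any laser ground truth.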
The Non-Decarbonization Puzzle in Brazilian Energy Policy
The Brazilian economy is not decarbonizing and current policies are highly
unlikely to change this. Expanding and diversifying the supply of renewable
energy would improve price stability and enhance energy supply and access. Why
do Brazilian governments adopt policy objectives and instruments which forego
the significant benefits available from ambitious decarbonization objectives,
and how can we explain differences across sectors? We analyze objectives and
instruments in hydropower, transport fuels, and solar and wind energy. With the
exception of hydropower, we find that the principal barriers to decarbonization
are policy inconsistencies. In solar and, to a lesser extent, wind energy,
national content requirements, a lack of R&D subsidies for building up
domestic manufacturing capacities as well as the design of electricity
auctions have stymied expansion. In transport fuels, the combination of
inconsistent fiscal incentives and a price cap on gasoline have weakened the
bioethanol sector in recent years. Emissions from the energy system are on a
long-term upward trajectory, and present policies also limit Brazil’s ability
to contribute to global mitigation efforts.
Brazil and the Durban Platform: ambitions and expectations
Brazil, together with other emerging powers, has repeatedly made headlines over the last few years as a serious player in international climate change negotiations. In December 2015 states will convene at the UN Climate Change Conference in Paris to agree on a new international climate treaty. What can we expect from Brazil at the upcoming climate summit? What can we expect from the negotiations on a new climate treaty in the context of the Durban Platform? This issue of the GIGA Focus discusses Brazil's potential role at the upcoming UN Climate Change Conference, analysing whether Brazil's expected contributions can keep up with its ambitious rhetoric. Brazil's presently low emissions trajectory is a result of reduced deforestation rates. With greenhouse gas emissions from all other sources increasing, an ambitious contribution to global post-2020 mitigation requires more stringent action. However, it is unlikely that Brazil will take ambitious measures in areas other than forestry. While Brazilian climate diplomacy puts a rhetorical premium on historical responsibility, its substantive contribution to the negotiation process is only moderately progressive. The proposal of "concentric differentiation" offers a way to implement the principle of common but differentiated responsibility in line with current realities while allowing for the obligations of Annex I (mostly developed countries) and major non-Annex I parties (mostly developing countries) to converge in the long term. The present context of the international negotiations is generally favourable towards Brazilian participation. The main challenge will be to conclude a transparency regime which facilitates collective action by allowing for adequate international review of domestic policies. To that end, the principle of common but differentiated responsibility should be implemented under the Paris agreement in a manner which aligns with the convention's long-term objective.
Map-Repair: Deep Cadastre Maps Alignment and Temporal Inconsistencies Fix in Satellite Images
In fast-developing countries it is hard to trace the construction of new
buildings or the demolition of old structures and, as a result, to keep
cadastre maps up to date. Moreover, due to the complexity of urban regions or
inconsistency of the data used for cadastre map extraction, errors in the form
of misalignments are a common problem. In this work, we propose an end-to-end
deep learning approach which is able to resolve inconsistencies between the
input intensity image and the available building footprints by correcting
label noise and, at the same time, misalignments where needed. The obtained
results demonstrate the robustness of the proposed method even on severely
misaligned examples, which makes it potentially suitable for real
applications, such as OpenStreetMap correction.
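A classical baseline for the misalignment part of this problem (not the paper's deep network) is a brute-force translation search that maximizes the overlap between a rasterized footprint and a building mask derived from the image. Everything below, including the search window, is an illustrative sketch.

```python
import numpy as np

def best_shift(image_mask, footprint, max_shift=5):
    """Integer (dy, dx) translation of `footprint` that maximizes its
    overlap with `image_mask` (both binary arrays of equal shape).
    A simple stand-in for learned cadastre-map alignment."""
    best_score = -1
    best_dydx = (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # shift the footprint and count overlapping building pixels
            shifted = np.roll(np.roll(footprint, dy, axis=0), dx, axis=1)
            score = int(np.sum(image_mask & shifted))
            if score > best_score:
                best_score = score
                best_dydx = (dy, dx)
    return best_dydx
```

The end-to-end approach in the abstract goes further in that it also corrects label noise (missing or spurious buildings), which a pure translation search cannot do.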