Shallow Water Bathymetry Mapping from UAV Imagery based on Machine Learning
The determination of accurate bathymetric information is a key element for
near-offshore activities and hydrological studies such as coastal engineering
applications, sedimentary processes, hydrographic surveying, as well as
archaeological mapping and biological research. UAV imagery processed with
Structure from Motion (SfM) and Multi-View Stereo (MVS) techniques can provide
a low-cost alternative to established shallow-seabed mapping techniques, while
also offering important visual information. Nevertheless, water refraction
poses significant challenges to depth determination. Until now, this problem
has been addressed through customized image-based refraction correction
algorithms or by modifying the collinearity equation. In this paper, in order
to overcome water refraction errors, we employ machine learning tools that
are able to learn the systematic underestimation of the estimated depths. In
the proposed approach, based on known depth observations from bathymetric LiDAR
surveys, a Support Vector Regression (SVR) model was developed that estimates
more accurately the real depths of point clouds derived from SfM-MVS
procedures. Experimental results over two test sites, along with the performed
quantitative validation, indicated the high potential of the developed approach.
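The learned correction can be illustrated with a minimal, self-contained sketch. The paper trains an SVR model; here a plain least-squares line d_true ≈ a·d_apparent + b stands in for that learned model, and the synthetic data assume the flat-water approximation in which refraction compresses apparent depths by roughly the refractive index of water (≈1.34). All names and numbers below are illustrative, not the paper's.

```python
# Sketch: learn a correction from apparent (SfM-MVS) depths to reference
# (LiDAR) depths. The paper uses SVR; a simple least-squares line stands in
# here to show the idea of learning the systematic underestimation.

def fit_line(xs, ys):
    # ordinary least squares for y ≈ a*x + b
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

true_depths = [0.5 * i for i in range(1, 21)]   # LiDAR reference, metres
apparent = [d / 1.34 for d in true_depths]      # refraction-compressed depths
a, b = fit_line(apparent, true_depths)          # learned correction
corrected = [a * d + b for d in apparent]
```

On this synthetic data the fit recovers a slope close to 1.34, i.e. the model learns to undo the refraction-induced depth compression.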
Constructions of teaching in an elite university: A case study
A case study was conducted to identify constructs of undergraduate teaching in an elite, research-intensive university. Qualitative data collection and analysis were carried out on transcripts from over 40 semi-structured interviews with the heads of teaching committees from each department or faculty of the university, as well as the heads of several other committees and other key stakeholders. The analysis also drew on relevant archival materials, publicly available data from the university, and governmental reports and documents.
The university offers its undergraduate students courses that range from the interdisciplinary to the very discipline-specific, and operates a system of personalised tuition that lies at the heart of, and thereby defines, excellent teaching. The high quality of its students, who are attracted by the research renown of the university, is regarded as the trademark of the elite institution. Culturally, the sense-making that supports the procedures and structures of the university rests on the assumption that excellent teachers are intrinsically associated with excellent research. Consequently, teaching excellence is recognised but is less well rewarded or acknowledged than research. Excellence in teaching is further constrained by organisation-wide arrangements for academic staff promotion that favour research.
Operating in a super-complex contemporary higher education landscape, this elite university projects a "mirror-image" of itself both externally and internally, the mirror image being justified by ongoing undergraduate achievements and application rates. Great reliance is placed on external examiners to monitor the high standards of achievement, the effect of which is to stifle collegiality about teaching. Institutional governance structures and procedures enable the organisation to operate a cybernetic (self-correcting) model of organisational control, in which change is perceived as adjustment: incremental and subtle.
Fast Algorithms for Energy Games in Special Cases
In this paper, we study algorithms for special cases of energy games, a class
of turn-based games on graphs that show up in the quantitative analysis of
reactive systems. In an energy game, the vertices of a weighted directed graph
belong either to Alice or to Bob. A token is moved to a next vertex by the
player controlling its current location, and its energy is changed by the
weight of the edge. Given a fixed starting vertex and initial energy, Alice
wins the game if the energy of the token remains nonnegative at every moment.
If the energy goes below zero at some point, then Bob wins. The problem of
determining the winner in an energy game lies in NP ∩ coNP. It is a
long-standing open problem whether a polynomial-time algorithm for this
problem exists.
We devise new algorithms for three special cases of the problem. The first
two results focus on the single-player version, where either Alice or Bob
controls the whole game graph. For a game graph controlled by Alice, we
develop an algorithm whose running time is parameterized by the maximum
weight W and by ω, the best exponent for matrix multiplication, via a
reduction to the All-Pairs Nonnegative Prefix Paths (APNP) problem. Thus we
study the APNP problem separately, and develop an algorithm for it as well.
For both problems, we improve over the state of the art for small W. For the
APNP problem, we also provide a conditional lower bound based on the APSP
Hypothesis. For a game graph controlled by Bob, we obtain a near-linear time
algorithm. Regarding our third result, we present a variant of the value
iteration algorithm, and we prove an improved running time for game graphs
without negative cycles.
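The value iteration mentioned above can be sketched as the textbook progress-measure computation (in the style of Brim et al.): repeatedly lift the minimal sufficient initial energy at each vertex until a fixpoint, where values exceeding n·W signal that Bob wins from that vertex. This is the basic algorithm, not the paper's improved variant, and it assumes every vertex has at least one outgoing edge.

```python
# Basic progress-measure value iteration for energy games (not the paper's
# faster variant). f[v] = minimal initial energy Alice needs at v; values
# capped at n*W + 1, which acts as "Alice cannot win from v".

def energy_values(n, edges, alice):
    # edges: list of (u, v, w); alice: set of Alice-controlled vertices.
    # Assumes every vertex has at least one outgoing edge.
    W = max((abs(w) for _, _, w in edges), default=0)
    top = n * W + 1                       # sentinel for a losing vertex
    out = {v: [] for v in range(n)}
    for u, v, w in edges:
        out[u].append((v, w))
    f = [0] * n
    changed = True
    while changed:
        changed = False
        for v in range(n):
            # energy needed if the token moves along (v, u, w)
            vals = [max(0, f[u] - w) for u, w in out[v]]
            new = min(vals) if v in alice else max(vals)  # Alice minimizes
            new = min(new, top)
            if new > f[v]:
                f[v] = new
                changed = True
    return f
```

For example, on the two-vertex cycle with edge weights −2 and +2 (both vertices Alice's), the fixpoint assigns initial energy 2 to the vertex whose outgoing edge loses energy first, and 0 to the other.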
Computing Smallest Convex Intersecting Polygons
Funding: Mark de Berg is supported by the Dutch Research Council (NWO) through Gravitation-grant NETWORKS-024.002.003. Antonis Skarlatos: part of the work was done during an internship at the Max Planck Institute for Informatics in Saarbrücken, Germany.
A polygon C is an intersecting polygon for a set O of objects in R^2 if C intersects each object in O, where the polygon includes its interior. We study the problems of computing the minimum-perimeter intersecting polygon and the minimum-area convex intersecting polygon for a given set O of objects. We present an FPTAS for both problems for the case where O is a set of possibly intersecting convex polygons in the plane of total complexity n. Furthermore, we present an exact polynomial-time algorithm for the minimum-perimeter intersecting polygon for the case where O is a set of n possibly intersecting segments in the plane. So far, polynomial-time exact algorithms were only known for the minimum-perimeter intersecting polygon of lines or of disjoint segments.
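A building block behind such algorithms is the feasibility predicate: does a candidate convex polygon (interior included) intersect every segment in O? A minimal sketch with standard orientation tests, assuming counter-clockwise vertex order and ignoring degenerate collinear-touching cases:

```python
# Feasibility check: does a convex polygon (with interior) meet every segment?
# Vertices are assumed in counter-clockwise order; collinear touching is
# deliberately not handled, to keep the sketch short.

def cross(o, a, b):
    # z-component of (a - o) x (b - o)
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_convex(poly, p):
    # p lies in a CCW convex polygon iff it is left of (or on) every edge
    n = len(poly)
    return all(cross(poly[i], poly[(i + 1) % n], p) >= 0 for i in range(n))

def segs_intersect(p1, p2, q1, q2):
    # proper crossing test via orientations (collinear cases omitted)
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def polygon_meets_segment(poly, seg):
    a, b = seg
    if in_convex(poly, a) or in_convex(poly, b):
        return True
    n = len(poly)
    return any(segs_intersect(poly[i], poly[(i + 1) % n], a, b)
               for i in range(n))

def is_intersecting_polygon(poly, segments):
    return all(polygon_meets_segment(poly, s) for s in segments)
```

An optimization algorithm can use this predicate to test candidate polygons against the input set O.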
MODELLING COLOUR ABSORPTION OF UNDERWATER IMAGES USING SFM-MVS GENERATED DEPTH MAPS
Abstract. The problem of colour correction of underwater images concerns not only surveyors, who primarily use images for photogrammetric purposes, but also archaeologists, marine biologists, and experts in many other domains whose aim is to study objects and lifeforms underwater. Different methods exist in the literature. Some of them provide outstanding results, but they involve physical models that take into account additional information and variables (light conditions, depths, camera-to-object distances, water properties) that are not always available, or that must be measured with expensive equipment or calculated with more complicated models. Other methods have the advantage of working with essentially any kind of dataset, but without considering any geometric information, and therefore apply corrections that hold only under very generic conditions that most of the time differ from real-world applications. This paper presents an easy and fast method for restoring the colour information in images captured underwater. The compelling idea is to model light backscattering and absorption variation according to the distance of the surveyed object. This information is always obtainable in photogrammetric datasets, as the model exploits the scene's 3D geometry by creating and using SfM-MVS generated depth maps, which are crucial for implementing the proposed methodology. The results, presented visually and quantitatively, are promising, offering a straightforward and easily adaptable workflow to restore the colour information in underwater images.
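As an illustration only (not the paper's actual model or coefficients), a simplified single-channel underwater image formation model with depth-dependent attenuation and backscatter can be inverted per pixel once a depth map is available; `beta` and `B` below are made-up values.

```python
# Simplified formation model for one colour channel:
#   I = J * exp(-beta * d) + B * (1 - exp(-beta * d))
# where d is the SfM-MVS depth (camera-to-object distance), beta an
# attenuation coefficient, and B the backscatter (veiling) colour.
# Inverting it recovers the restored intensity J from the observed I.
import math

def restore_channel(intensity, depth, beta, B):
    t = math.exp(-beta * depth)            # transmission along the ray
    J = (intensity - B * (1.0 - t)) / t    # invert the formation model
    return min(1.0, max(0.0, J))           # clamp to the valid range [0, 1]
```

Because t shrinks with depth, the same observed intensity is corrected more aggressively for distant pixels, which is exactly the depth dependence the depth maps provide.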
Opportunistic power reassignment between processor and memory in 3D stacks
The pin count largely determines the cost of a chip package, which is often comparable to the cost of a die. In 3D processor-memory designs, power and ground (P/G) pins can account for the majority of the pins. This is because packages include separate pins for the disjoint processor and memory power delivery networks (PDNs). Supporting separate PDNs and P/G pins for processor and memory is inefficient, as each set has to be provisioned for the worst-case power delivery requirements.
In this thesis, we propose to reduce the number of P/G pins of both processor and memory in a 3D design, and dynamically and opportunistically divert some power between the two PDNs on demand. To perform the power transfer, we use a small bidirectional on-chip voltage regulator that connects the two PDNs. Our concept, called Snatch, is effective. It allows the computer to execute code sections with high processor or memory power requirements without having to throttle performance. We evaluate Snatch with simulations of an 8-core multicore stacked with two memory dies. In a set of compute-intensive codes, the processor snatches memory power for 30% of the time on average, speeding up the codes by up to 23% over advanced turbo-boosting; in memory-intensive codes, the memory snatches processor power. Alternatively, Snatch can reduce the package cost by about 30%.
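A toy sketch of the idea (not the thesis's actual mechanism or numbers): in each interval, the side that exceeds its PDN budget may borrow headroom from the other side, capped by the bidirectional regulator's capacity. The function and all values are hypothetical.

```python
# Toy model of opportunistic power reassignment between two PDNs.
# Whichever side is over budget borrows the other side's unused headroom,
# limited by the on-chip voltage regulator capacity vr_cap (all in watts).

def allocate(cpu_demand, mem_demand, cpu_budget, mem_budget, vr_cap):
    if cpu_demand > cpu_budget and mem_demand < mem_budget:
        # processor "snatches" unused memory power
        transfer = min(cpu_demand - cpu_budget, mem_budget - mem_demand, vr_cap)
        return cpu_budget + transfer, mem_budget - transfer
    if mem_demand > mem_budget and cpu_demand < cpu_budget:
        # memory "snatches" unused processor power
        transfer = min(mem_demand - mem_budget, cpu_budget - cpu_demand, vr_cap)
        return cpu_budget - transfer, mem_budget + transfer
    return cpu_budget, mem_budget          # no transfer possible or needed
```

The cap models the point made in the abstract: each PDN alone no longer needs worst-case provisioning, because peak demands on one side can be served from the other side's slack.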
Dynamic algorithms for k-center on graphs
In this paper we give the first efficient algorithms for the k-center
problem on dynamic graphs undergoing edge updates. In this problem, the goal is
to partition the input into k sets by choosing k centers such that the
maximum distance from any data point to the closest center is minimized. It is
known that it is NP-hard to obtain a better-than-2 approximation for this
problem.
While in many applications the input may naturally be modeled as a graph, all
prior works on the k-center problem in dynamic settings are on metrics. In this
paper, we give a deterministic decremental approximation algorithm and a
randomized incremental approximation algorithm, both with amortized update
time guarantees for weighted graphs. Moreover, we show a reduction that leads
to a fully dynamic approximation algorithm for the k-center problem, with
worst-case update time that is within a factor of the state-of-the-art upper
bound for maintaining approximate single-source distances in graphs. Matching
this bound is a natural goalpost because the approximate distances of each
vertex to its center can be used to maintain an approximation of the graph
diameter, and the fastest known algorithms for such a diameter approximation
also rely on maintaining approximate single-source distances.
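For contrast with the dynamic setting, the classic static baseline is Gonzalez's farthest-point greedy, a well-known 2-approximation; on an unweighted graph the metric is given by BFS distances. This sketch shows the static algorithm, not the paper's dynamic one, and assumes a connected graph given as an adjacency dict.

```python
# Gonzalez's farthest-point greedy for k-center on an unweighted graph:
# repeatedly add the vertex farthest from the chosen centers. Gives a
# 2-approximation of the optimal radius. Assumes a connected graph.
from collections import deque

def bfs_dist(adj, src):
    # unweighted shortest-path distances from src
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def greedy_k_center(adj, k):
    verts = list(adj)
    centers = [verts[0]]                  # arbitrary first center
    best = bfs_dist(adj, verts[0])        # distance to nearest chosen center
    for _ in range(k - 1):
        far = max(verts, key=best.__getitem__)   # farthest remaining vertex
        centers.append(far)
        d = bfs_dist(adj, far)
        for v in verts:
            best[v] = min(best[v], d[v])
    return centers, max(best.values())    # centers and covering radius
```

On the 5-vertex path 0-1-2-3-4 with k = 2, the greedy picks the endpoints and achieves radius 2, within a factor 2 of the optimal radius 1 (centers 1 and 3).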