Cell Detection with Star-convex Polygons
Automatic detection and segmentation of cells and nuclei in microscopy images
is important for many biological applications. Recent successful learning-based
approaches include per-pixel cell segmentation with subsequent pixel grouping,
or localization of bounding boxes with subsequent shape refinement. In
situations of crowded cells, these can be prone to segmentation errors, such as
falsely merging bordering cells or suppressing valid cell instances due to the
poor approximation with bounding boxes. To overcome these issues, we propose to
localize cell nuclei via star-convex polygons, which are a much better shape
representation as compared to bounding boxes and thus do not need shape
refinement. To that end, we train a convolutional neural network that predicts
for every pixel a polygon for the cell instance at that position. We
demonstrate the merits of our approach on two synthetic datasets and one
challenging dataset of diverse fluorescence microscopy images.
Comment: Conference paper at MICCAI 201
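The per-pixel prediction described above amounts to decoding, at each pixel, a set of radial distances along equally spaced rays into polygon vertices. A minimal sketch of that decoding step (the function name and exact parametrization here are illustrative, not the paper's code):

```python
import math

def star_convex_polygon(center, radii):
    """Reconstruct a star-convex polygon from radial distances.

    `center` is the (x, y) pixel at which the network predicted `radii`,
    one distance per equally spaced ray. Illustrative decoding only; the
    paper's exact output format may differ in detail.
    """
    cx, cy = center
    n = len(radii)
    return [
        (cx + r * math.cos(2 * math.pi * k / n),
         cy + r * math.sin(2 * math.pi * k / n))
        for k, r in enumerate(radii)
    ]

# four rays of length 2 around (10, 10): a small diamond-shaped polygon
poly = star_convex_polygon((10.0, 10.0), [2.0, 2.0, 2.0, 2.0])
```

Because each radius is measured from a point inside the instance, any such polygon is star-convex by construction, which is what lets the method skip a separate shape-refinement stage.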
Kinetic and Dynamic Delaunay tetrahedralizations in three dimensions
We describe the implementation of algorithms to construct and maintain
three-dimensional dynamic Delaunay triangulations with kinetic vertices using a
three-simplex data structure. The code is capable of constructing the geometric
dual, the Voronoi or Dirichlet tessellation. Initially, a given list of points
is triangulated. Time evolution of the triangulation is not only governed by
kinetic vertices but also by a changing number of vertices. We use
three-dimensional simplex flip algorithms, a stochastic visibility walk
algorithm for point location and in addition, we propose a new simple method of
deleting vertices from an existing three-dimensional Delaunay triangulation
while maintaining the Delaunay property. The dual Dirichlet tessellation can be
used to solve differential equations on an irregular grid, to define partitions
in cell tissue simulations, for collision detection, etc.
Comment: 29 pages (preprint), 12 figures, 1 table. Title changed (mainly
nomenclature), referee suggestions included, typos corrected, bibliography
updated
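The flip algorithms mentioned above are driven by the in-sphere predicate: a simplex violates the Delaunay property when the opposite vertex lies inside its circumsphere. As a sketch, here is the 2D analogue of that predicate (the paper's setting is 3D, and a production implementation would use exact arithmetic rather than plain floats):

```python
def in_circle(a, b, c, d):
    """Return True if point d lies strictly inside the circle through
    a, b, c (given in counter-clockwise order) -- the predicate that
    drives Delaunay flip algorithms. Floating-point sketch only; robust
    code would evaluate this determinant with exact arithmetic.
    """
    # 3x3 determinant with rows (px - dx, py - dy, |p - d|^2)
    m = [[px - d[0], py - d[1], (px - d[0]) ** 2 + (py - d[1]) ** 2]
         for px, py in (a, b, c)]
    det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    return det > 0

# the origin lies inside the unit circle through these three points
print(in_circle((1, 0), (0, 1), (-1, 0), (0, 0)))  # True
```

In 3D the same idea becomes a 4x4 in-sphere determinant over the four tetrahedron vertices and the query point, and a flip is performed whenever the test fires.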
Uncertainty Estimation in Instance Segmentation with Star-convex Shapes
Instance segmentation has witnessed promising advancements through deep
neural network-based algorithms. However, these models often exhibit incorrect
predictions with unwarranted confidence levels. Consequently, evaluating
prediction uncertainty becomes critical for informed decision-making. Existing
methods primarily focus on quantifying uncertainty in classification or
regression tasks, lacking emphasis on instance segmentation. Our research
addresses the challenge of estimating spatial certainty associated with the
location of instances with star-convex shapes. We evaluate two distinct
clustering approaches that compute spatial and fractional certainty per
instance from samples drawn with the Monte-Carlo Dropout or Deep Ensemble
technique. Our study demonstrates that combining spatial and fractional
certainty scores yields improved calibrated estimation over individual
certainty scores. Notably, our experimental results show that the Deep Ensemble
technique alongside our novel radial clustering approach proves to be an
effective strategy. Our findings emphasize the significance of evaluating the
calibration of estimated certainties for model reliability and decision-making.
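To make the two score types concrete: fractional certainty reflects how often an instance is detected across sampled forward passes, while spatial certainty reflects how consistently it is localized. A toy sketch under that reading (names and scoring formulas here are hypothetical; the paper's clustering-based scores are more involved):

```python
import math

def spatial_fractional_certainty(instance_centers, n_samples):
    """Toy per-instance certainty scores from sampled predictions.

    `instance_centers`: (x, y) centers of one matched instance across the
    forward passes (e.g. Monte-Carlo Dropout samples) in which it was
    detected; `n_samples` is the total number of passes. Hypothetical
    illustration, not the paper's exact definitions.
    """
    k = len(instance_centers)
    fractional = k / n_samples  # fraction of passes that detected it
    mx = sum(x for x, _ in instance_centers) / k
    my = sum(y for _, y in instance_centers) / k
    # spatial certainty: shrinks as the sampled centers spread out
    spread = sum(math.hypot(x - mx, y - my)
                 for x, y in instance_centers) / k
    spatial = 1.0 / (1.0 + spread)
    return spatial, fractional

s, f = spatial_fractional_certainty([(5, 5), (5, 5), (6, 5)], n_samples=4)
```

Combining the two scores, as the abstract reports, penalizes both instances that appear rarely and instances whose location jitters between samples.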
Minkowski Sum Construction and other Applications of Arrangements of Geodesic Arcs on the Sphere
We present two exact implementations of efficient output-sensitive algorithms
that compute Minkowski sums of two convex polyhedra in 3D. We do not assume
general position. Namely, we handle degenerate input, and produce exact
results. We provide a tight bound on the exact maximum complexity of Minkowski
sums of polytopes in 3D in terms of the number of facets of the summand
polytopes. The algorithms employ variants of a data structure that represents
arrangements embedded on two-dimensional parametric surfaces in 3D, and they
make use of many operations applied to arrangements in these representations.
We have developed software components that support the arrangement
data-structure variants and the operations applied to them. These software
components are generic, as they can be instantiated with any number type.
However, our algorithms require only (exact) rational arithmetic. These
software components together with exact rational-arithmetic enable a robust,
efficient, and elegant implementation of the Minkowski-sum constructions and
the related applications. These software components are provided through a
package of the Computational Geometry Algorithm Library (CGAL) called
Arrangement_on_surface_2. We also present exact implementations of other
applications that exploit arrangements of arcs of great circles embedded on the
sphere. We use them as basic blocks in an exact implementation of an efficient
algorithm that partitions an assembly of polyhedra in 3D with two hands using
infinite translations. This application distinctly shows the importance of
exact computation, as imprecise computation might result in the dismissal of
valid partitioning-motions.
Comment: A Ph.D. thesis carried out at Tel-Aviv University, 134 pages long.
The advisor was Prof. Dan Halperi
Algorithms for fat objects: decompositions and applications
Computational geometry is the branch of theoretical computer science that deals with algorithms and data structures for geometric objects. The most basic geometric objects include points, lines, polygons, and polyhedra. Computational geometry has applications in many areas of computer science, including computer graphics, robotics, and geographic information systems. In many computational-geometry problems, the theoretical worst case is achieved by input that is in some way "unrealistic". This causes situations where the theoretical running time is not a good predictor of the running time in practice. In addition, algorithms must also be designed with the worst-case examples in mind, which causes them to be needlessly complicated. In recent years, realistic input models have been proposed in an attempt to deal with this problem. The usual form such solutions take is to limit some geometric property of the input to a constant. We examine a specific realistic input model in this thesis: the model where objects are restricted to be fat. Intuitively, objects that are more like a ball are more fat, and objects that are more like a long pole are less fat. We look at fat objects in the context of five different problems—two related to decompositions of input objects and three problems suggested by computer graphics. Decompositions of geometric objects are important because they are often used as a preliminary step in other algorithms, since many algorithms can only handle geometric objects that are convex and preferably of low complexity. The two main issues in developing decomposition algorithms are to keep the number of pieces produced by the decomposition small and to compute the decomposition quickly. The main question we address is the following: is it possible to obtain better decompositions for fat objects than for general objects, and/or is it possible to obtain decompositions quickly? 
These questions are also interesting because most research into fat objects has concerned objects that are convex. We begin by triangulating fat polygons. The problem of triangulating polygons—that is, partitioning them into triangles without adding any vertices—has been solved already, but the only linear-time algorithm is so complicated that it has never been implemented. We propose two algorithms for triangulating fat polygons in linear time that are much simpler. They make use of the observation that a small set of guards placed at points inside a (certain type of) fat polygon is sufficient to see the boundary of such a polygon. We then look at decompositions of fat polyhedra in three dimensions. We show that polyhedra can be decomposed into a linear number of convex pieces if certain fatness restrictions are met. We also show that if these restrictions are not met, a quadratic number of pieces may be needed. We also show that if we wish the output to be fat and convex, the restrictions must be much tighter. We then study three computational-geometry problems inspired by computer graphics. First, we study ray-shooting amidst fat objects from two perspectives. This is the problem of preprocessing data into a data structure that can answer which object is first hit by a query ray in a given direction from a given point. We present a new data structure for answering vertical ray-shooting queries—that is, queries where the ray’s direction is fixed—as well as a data structure for answering ray-shooting queries for rays with arbitrary direction. Both structures improve the best known results on these problems. Another problem that is studied in the field of computer graphics is the depth-order problem. We study it in the context of computational geometry.
This is the problem of finding an ordering of the objects in the scene from "top" to "bottom", where one object is above the other if they share a point in the projection to the xy-plane and the first object has a higher z-value at that point. We give an algorithm for finding the depth order of a group of fat objects and an algorithm for verifying if a depth order of a group of fat objects is correct. The latter algorithm is useful because the former can return an incorrect order if the objects do not have a depth order (this can happen if the above/below relationship has a cycle in it). The first algorithm improves on the results previously known for fat objects; the second is the first algorithm for verifying depth orders of fat objects. The final problem that we study is the hidden-surface removal problem. In this problem, we wish to find and report the visible portions of a scene from a given viewpoint—this is called the visibility map. The main difficulty in this problem is to find an algorithm whose running time depends in part on the complexity of the output. For example, if all but one of the objects in the input scene are hidden behind one large object, then our algorithm should have a faster running time than if all of the objects are visible and have borders that overlap. We give such an algorithm that improves on the running time of previous algorithms for fat objects. Furthermore, our algorithm is able to handle curved objects and situations where the objects do not have a depth order—two features missing from most other algorithms that perform hidden-surface removal.
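The ball-versus-pole intuition for fatness can be made quantitative in several ways. One crude illustration: compare a convex polygon's area to the area of the circle whose diameter matches the polygon's diameter (this is an illustrative measure, not the thesis's formal fatness model):

```python
import math

def fatness(vertices):
    """Crude fatness score for a convex polygon: its area divided by the
    area of the circle whose diameter equals the polygon's diameter.
    Values near 1 mean ball-like ("fat"); values near 0 mean pole-like.
    Illustrative only -- not the formal definition used in the thesis.
    """
    n = len(vertices)
    # shoelace formula for the polygon area
    area = abs(sum(vertices[i][0] * vertices[(i + 1) % n][1]
                 - vertices[(i + 1) % n][0] * vertices[i][1]
                   for i in range(n))) / 2
    # diameter: farthest pair of vertices (brute force)
    diam = max(math.dist(p, q) for p in vertices for q in vertices)
    return area / (math.pi * (diam / 2) ** 2)

print(fatness([(0, 0), (1, 0), (1, 1), (0, 1)]))        # square: fat
print(fatness([(0, 0), (10, 0), (10, 0.1), (0, 0.1)]))  # long pole: thin
```

Under any such measure, restricting inputs to scores bounded away from zero is what rules out the long-and-thin worst-case configurations the thesis's algorithms exploit.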
An integer representation for periodic tilings of the plane by regular polygons
We describe a representation for periodic tilings of the plane by regular polygons. Our
approach is to represent explicitly a small subset of seed vertices from which we systematically generate all elements of the tiling by translations. We represent a tiling concretely
by a (2+n)×4 integer matrix containing lattice coordinates for two translation vectors
and n seed vertices. We discuss several properties of this representation and describe
how to exploit the representation elegantly and efficiently for reconstruction, rendering,
and automatic crystallographic classification by symmetry detection.
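The generation step described above is simple to sketch: every tiling vertex is a seed vertex shifted by an integer combination of the two translation vectors. A minimal version (shown with plain 2D coordinates rather than the paper's 4-column lattice coordinates):

```python
def tiling_vertices(t1, t2, seeds, reps=2):
    """Generate vertices of a periodic tiling from two translation
    vectors t1, t2 and a list of seed vertices, enumerating the integer
    combinations i*t1 + j*t2 for |i|, |j| <= reps. Plain-coordinate
    sketch of the paper's (2+n)x4 integer representation.
    """
    pts = set()
    for i in range(-reps, reps + 1):
        for j in range(-reps, reps + 1):
            for sx, sy in seeds:
                pts.add((sx + i * t1[0] + j * t2[0],
                         sy + i * t1[1] + j * t2[1]))
    return pts

# square lattice: one seed vertex replicated by unit translations
grid = tiling_vertices((1, 0), (0, 1), [(0, 0)], reps=1)
# a 3x3 block of integer points
```

Storing only the two translation vectors and the n seeds keeps the representation finite even though the tiling itself is infinite; rendering and classification then work on a bounded number of replicas like the `reps` window above.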
Moderate-depth benthic habitats of St. John, U.S. Virgin Islands
The National Oceanic and Atmospheric Administration’s (NOAA) Center for Coastal Monitoring and Assessment’s (CCMA) Biogeography Branch and the U.S. National Park Service (NPS) have completed mapping the moderate-depth marine environment south of St. John. This work is an expansion of ongoing mapping and monitoring efforts conducted by NOAA and NPS in the U.S. Caribbean. The standardized protocols used in this effort will enable scientists and managers to quantitatively compare moderate-depth coral reef ecosystems around St. John to those throughout the U.S. Territories. These protocols and products will also help support the effective management and conservation of the marine resources within the National Park system.