Fast Approximate Convex Decomposition
Approximate convex decomposition (ACD) is a technique that partitions an input object into "approximately convex" components. Decomposition into approximately convex pieces is both more efficient to compute than exact convex decomposition and generates a more manageable number of components. It can be used as the basis of divide-and-conquer algorithms for applications such as collision detection, skeleton extraction, and mesh generation. In this paper, we propose a new method called Fast Approximate Convex Decomposition (FACD) that improves the quality of the decomposition and reduces the cost of computing it for both 2D and 3D models. First, we propose a new strategy for evaluating potential cuts that aims to reduce the relative concavity, rather than absolute concavity. As shown in our results, this leads to more natural and smaller decompositions that include components for small but important features such as toes or fingers, while not decomposing larger components, such as the torso, that may have concavities due to surface texture. Second, instead of decomposing a component into two pieces at each step, as in the original ACD, we propose a new strategy that uses a dynamic programming approach to select a set of n_c non-crossing (independent) cuts that can be simultaneously applied to decompose the component into n_c + 1 components. This reduces the depth of recursion and, together with a more efficient method for computing the concavity measure, leads to significant gains in efficiency. We provide comparative results for 2D and 3D models illustrating the improvements obtained by FACD over ACD, and we compare with the segmentation methods given in the Princeton Shape Benchmark.
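The dynamic-programming selection of non-crossing cuts can be sketched as an interval DP over the boundary vertices. The sketch below assumes cuts are endpoint-disjoint and that each candidate cut already carries a score (e.g. its concavity reduction); the function name and data layout are our own, not the paper's:

```python
from functools import lru_cache

def select_noncrossing_cuts(n, cuts):
    """Maximum-total-score set of pairwise non-crossing, endpoint-disjoint
    cuts on a polygon with boundary vertices 0..n-1.

    cuts: dict mapping a candidate cut (i, j), i < j, to a positive score
    (assumed here to be the concavity reduction that cut achieves)."""

    @lru_cache(maxsize=None)
    def best(i, j):
        # Best achievable score using only cuts with endpoints in i..j.
        if i >= j:
            return 0.0
        # Either vertex i is not a cut endpoint, or some cut (i, k) splits
        # the range into two independent sub-problems.
        score = best(i + 1, j)
        for k in range(i + 1, j + 1):
            if (i, k) in cuts:
                score = max(score,
                            cuts[(i, k)] + best(i + 1, k - 1) + best(k + 1, j))
        return score

    return best(0, n - 1)
```

On a square, the two diagonals cross, so only the better-scoring one can be taken; disjoint cuts are accumulated.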
Approximate convex decomposition and its applications
Geometric computations are essential in many real-world problems. One important
issue in geometric computations is that the geometric models in these problems
can be so large that computations on them have infeasible storage or computation
time requirements. Decomposition is a technique commonly used to partition complex
models into simpler components. Whereas decomposition into convex components results
in pieces that are easy to process, such decompositions can be costly to construct
and can result in representations with an unmanageable number of components. In
this work, we have developed an approximate technique, called Approximate Convex
Decomposition (ACD), which decomposes a given polygon or polyhedron into "approximately
convex" pieces that can provide benefits similar to those of convex components,
while the resulting decomposition is both significantly smaller (typically by orders of
magnitude) and more efficient to compute. Indeed, for many applications, an
ACD can represent the important structural features of the model more accurately
by providing a mechanism for ignoring less significant features, such as wrinkles and
surface texture. Our study of a wide range of applications shows that in addition to
providing computational efficiency, ACD also provides natural multi-resolution or hierarchical
representations. In this dissertation, we provide some examples of ACD's
many potential applications, such as particle simulation, mesh generation, motion
planning, and skeleton extraction.
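As a rough illustration of the 2D case, the sketch below measures a vertex's concavity as its distance to the convex hull boundary and recursively splits until every piece is nearly convex. The cut choice here (most concave vertex paired with a roughly opposite vertex) is purely illustrative; the actual ACD algorithm selects the cut partner so as to minimize the resulting concavity:

```python
import math

def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def seg_dist(p, a, b):
    """Distance from point p to segment a-b."""
    ax, ay, bx, by, px, py = *a, *b, *p
    dx, dy = bx - ax, by - ay
    t = max(0.0, min(1.0, ((px-ax)*dx + (py-ay)*dy) / (dx*dx + dy*dy or 1.0)))
    return math.hypot(px - (ax + t*dx), py - (ay + t*dy))

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def half(points):
        h = []
        for p in points:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]

def point_hull_distance(p, hull):
    """Distance from p to the hull boundary: a simple concavity measure."""
    return min(seg_dist(p, a, b) for a, b in zip(hull, hull[1:] + hull[:1]))

def acd(polygon, tau):
    """Recursively split `polygon` (list of (x, y), CCW) until every
    piece's maximum concavity is below tau; returns a list of pieces."""
    hull_set = set(convex_hull(polygon))
    hull = convex_hull(polygon)
    notches = [(point_hull_distance(p, hull), i)
               for i, p in enumerate(polygon) if p not in hull_set]
    if not notches or max(notches)[0] <= tau:
        return [polygon]          # piece is "approximately convex"
    _, i = max(notches)
    # Illustrative cut: pair the deepest notch with an opposite vertex.
    j = (i + len(polygon) // 2) % len(polygon)
    i, j = sorted((i, j))
    piece1 = polygon[i:j+1]
    piece2 = polygon[j:] + polygon[:i+1]
    return acd(piece1, tau) + acd(piece2, tau)
```

For an L-shaped polygon, the single notch at the inner corner triggers one split into two convex quads; with a large tolerance the polygon is kept whole.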
Approximate convex decomposition based localization in wireless sensor networks
Accurate localization in wireless sensor networks is the foundation for many applications, such as geographic routing and position-aware data processing. An important research direction for localization is to develop schemes using connectivity information only. These schemes primarily apply hop counts to distance estimation. Not surprisingly, they work well only when the network topology has a convex shape. In this paper, we develop a new Localization protocol based on Approximate Convex Decomposition (ACDL). It can calculate virtual node locations for a large-scale sensor network with arbitrary shape. The basic idea is to decompose the network into convex subregions. This is not straightforward, however. We first examine the factors that influence localization accuracy when the network is concave, such as the sharpness of the concave angle and the depth of the concave valley. We show that after decomposition, the depth of the concave valley becomes irrelevant. We thus define concavity according to the angle at a concave point, which can reflect the localization error. We then propose the ACDL protocol for network localization. It consists of four main steps. First, convex and concave nodes are recognized and network boundaries are segmented. As the sensor network is discrete, we show that it is acceptable to identify the concave nodes approximately while controlling the localization error. Second, an approximate convex decomposition is conducted. Our convex decomposition requires only local information, and we show that it has low message overhead. Third, for each convex subregion of the network, an improved Multi-Dimensional Scaling (MDS) algorithm is proposed to compute a relative location map. Fourth, a fast and low-complexity merging algorithm is developed to construct the global location map.
Our simulation on several representative networks demonstrates that ACDL's localization error is 60%-90% smaller than that of the typical MDS-MAP algorithm and 20%-30% smaller than that of CATL, a recent state-of-the-art localization algorithm. Department of Computing. Refereed conference paper.
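The per-subregion relative map in the third step rests on classical MDS: double-center the squared (hop-count-derived) distance matrix and take the leading eigenvectors. A minimal sketch of that core step, with the paper's MDS improvements and the merging step omitted:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Recover relative coordinates from a pairwise distance matrix D
    (in MDS-MAP-style localization, D holds hop-count distance estimates).

    Classical MDS: double-center the squared distances to get a Gram
    matrix, then factor it via its top eigenvectors."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n     # centering matrix
    B = -0.5 * J @ (D ** 2) @ J             # Gram matrix of centered coords
    w, V = np.linalg.eigh(B)                # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]         # keep the top `dim` components
    scale = np.sqrt(np.maximum(w[idx], 0.0))
    return V[:, idx] * scale                # n x dim relative location map
```

Note the result is only a relative map (up to rotation, reflection, and translation), which is why a merging step is needed to stitch subregion maps into global coordinates.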
Approximation Schemes for Partitioning: Convex Decomposition and Surface Approximation
We revisit two NP-hard geometric partitioning problems - convex decomposition
and surface approximation. Building on recent developments in geometric
separators, we present quasi-polynomial time algorithms for these problems with
improved approximation guarantees. Comment: 21 pages, 6 figures.
A Computational Model of the Short-Cut Rule for 2D Shape Decomposition
We propose a new 2D shape decomposition method based on the short-cut rule.
The short-cut rule originates from cognition research, and states that the
human visual system prefers to partition an object into parts using the
shortest possible cuts. We propose and implement a computational model for the
short-cut rule and apply it to the problem of shape decomposition. The proposed
model generates a set of cut hypotheses passing through the silhouette points
that correspond to negative minima of curvature. We then show that
most part-cut hypotheses can be eliminated by analysis of local properties of
each. Finally, the remaining hypotheses are evaluated in ascending length
order, which guarantees that, of any pair of conflicting cuts, only the shorter
will be accepted. We demonstrate that, compared with state-of-the-art shape
decomposition methods, the proposed approach achieves decomposition results
which better correspond to human intuition as revealed in psychological
experiments. Comment: 11 pages.
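The ascending-length evaluation can be sketched as a greedy pass: sort candidate cuts by length and accept each one that does not cross a previously accepted (and therefore shorter) cut. The conflict test below treats cuts as chords of the closed silhouette; the names and the exact conflict definition are our simplification:

```python
def crosses(c1, c2):
    """Chords (a, b) and (c, d) of a closed silhouette cross iff exactly
    one of c, d lies strictly between a and b along the boundary order."""
    (a, b), (c, d) = sorted(c1), sorted(c2)
    return (a < c < b) != (a < d < b)

def shortcut_select(cuts):
    """Greedy short-cut rule: consider candidate cuts in ascending length
    order; of any conflicting (crossing) pair, only the shorter survives.

    cuts: list of (length, (i, j)) where i, j index silhouette points,
    e.g. negative curvature minima."""
    accepted = []
    for _, cut in sorted(cuts):
        if all(not crosses(cut, other) for other in accepted):
            accepted.append(cut)
    return accepted
```

The shortest cut is always kept, and any longer cut crossing it is discarded, matching the rule that the shorter of two conflicting cuts wins.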
Constructing IGA-suitable planar parameterization from complex CAD boundary by domain partition and global/local optimization
In this paper, we propose a general framework for constructing IGA-suitable
planar B-spline parameterizations from given complex CAD boundaries consisting
of a set of B-spline curves. Instead of forming the computational domain by a
simple boundary, planar domains with high genus and more complex boundary
curves are considered. Firstly, some pre-processing operations including
Bézier extraction and subdivision are performed on each boundary curve in
order to generate a high-quality planar parameterization; then a robust planar
domain partition framework is proposed to construct high-quality patch-meshing
results with few singularities from the discrete boundary formed by connecting
the end points of the resulting boundary segments. After generating the
topology information of the quadrilateral decomposition, the optimal placement of
interior Bézier curves corresponding to the interior edges of the
quadrangulation is constructed by a global optimization method to achieve a
high-quality patch partition. Finally, after the imposition of
C1/G1-continuity constraints on the interfaces of neighboring Bézier patches
with respect to each quad in the quadrangulation, the high-quality Bézier
patch parameterization is obtained by a C1-constrained local optimization
method to achieve uniform and orthogonal iso-parametric structures while
keeping the continuity conditions between patches. The efficiency and
robustness of the proposed method are demonstrated by several examples which
are compared to results obtained by the skeleton-based parameterization
approach.
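The subdivision mentioned in the pre-processing step can be carried out with de Casteljau's algorithm, which splits a Bézier curve into two sub-curves at a chosen parameter value. A generic sketch (not the paper's implementation):

```python
def decasteljau_split(ctrl, t=0.5):
    """Split a Bézier curve with control points `ctrl` into two sub-curves
    at parameter t, via repeated linear interpolation (de Casteljau).
    Returns the control point lists of the left and right sub-curves."""
    left, right = [ctrl[0]], [ctrl[-1]]
    pts = list(ctrl)
    while len(pts) > 1:
        # One round of linear interpolation between consecutive points.
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
        left.append(pts[0])       # outermost points of each round form
        right.append(pts[-1])     # the sub-curves' control polygons
    return left, right[::-1]
```

Splitting a quadratic arc at t = 0.5 yields two quadratics that meet at the curve's midpoint, which is how boundary curves can be refined before the domain partition.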
Point Pair Feature based Object Detection for Random Bin Picking
Point pair features are a popular representation for free form 3D object
detection and pose estimation. In this paper, their performance in an
industrial random bin picking context is investigated. A new method to generate
representative synthetic datasets is proposed. This makes it possible to
investigate the influence of a high degree of clutter and the presence of
self-similar features, which are typical of our application. We provide an
overview of solutions proposed in the literature and discuss their strengths
and weaknesses. A
simple heuristic method to drastically reduce the computational complexity is
introduced, which results in improved robustness, speed and accuracy compared
to the naive approach.
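For context, the point pair feature of Drost et al. combines two oriented points into a 4D descriptor; quantized features index a hash table built from the model. A minimal sketch (our own plain-Python rendering, not the paper's code):

```python
import math

def _angle(u, v):
    """Angle in [0, pi] between 3D vectors u and v."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(a * a for a in v))
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def point_pair_feature(p1, n1, p2, n2):
    """4D point pair feature of two oriented points (p, n):
    F = (||d||, angle(n1, d), angle(n2, d), angle(n1, n2)),  d = p2 - p1.
    Quantized versions of F key the model hash table used at detection time."""
    d = tuple(b - a for a, b in zip(p1, p2))
    return (math.sqrt(sum(x * x for x in d)),
            _angle(n1, d), _angle(n2, d), _angle(n1, n2))
```

Self-similar parts produce many near-identical features, which is precisely what makes the bin-picking setting studied here challenging for this representation.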
Developmental Bayesian Optimization of Black-Box with Visual Similarity-Based Transfer Learning
We present a developmental framework based on a long-term memory and
reasoning mechanisms (Vision Similarity and Bayesian Optimisation). This
architecture allows a robot to autonomously optimize hyper-parameters that need
to be tuned for any action and/or vision module, treated as a black box. The
learning can take advantage of past experiences (stored in the episodic and
procedural memories) in order to warm-start the exploration using a set of
hyper-parameters previously optimized from objects similar to the new unknown
one (stored in a semantic memory). As an example, the system has been used to
optimize 9 continuous hyper-parameters of a professional software package (Kamido),
both in simulation and with a real robot (industrial robotic arm Fanuc) with a
total of 13 different objects. The robot is able to find a good object-specific
optimization in 68 (simulation) or 40 (real) trials. In simulation, we
demonstrate the benefit of the transfer learning based on visual similarity, as
opposed to an amnesic learning (i.e. learning from scratch all the time).
Moreover, with the real robot, we show that the method consistently outperforms
manual optimization by an expert, requiring less than 2 hours of training time
to achieve a success rate of more than 88%.
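The warm-start step can be sketched as a nearest-neighbour lookup in the semantic memory: retrieve the best hyper-parameters of the most visually similar past objects and use them as the optimizer's initial candidates. The memory layout and cosine similarity below are our assumptions; the paper's actual similarity measure may differ:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(a * a for a in v)))

def warm_start_candidates(memory, new_features, k=2):
    """Return the stored best hyper-parameter settings of the k past
    objects whose visual feature vectors are most similar to the new
    object's; these seed the Bayesian optimisation instead of random
    initial points.

    memory: list of (feature_vector, best_hyperparams) pairs -- a
    hypothetical stand-in for the paper's semantic memory."""
    ranked = sorted(memory, key=lambda e: cosine(e[0], new_features),
                    reverse=True)
    return [hp for _, hp in ranked[:k]]
```

When no similar object exists in memory, the retrieved seeds are uninformative and the learning degenerates to the amnesic (from-scratch) case the abstract compares against.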