Aspects of Unstructured Grids and Finite-Volume Solvers for the Euler and Navier-Stokes Equations
One of the major achievements in engineering science has been the development of computer algorithms for solving nonlinear differential equations such as the Navier-Stokes equations. In the past, limited computer resources have motivated the development of efficient numerical schemes in computational fluid dynamics (CFD) utilizing structured meshes. The use of structured meshes greatly simplifies the implementation of CFD algorithms on conventional computers. Unstructured grids, on the other hand, offer an alternative for modeling complex geometries. Unstructured meshes have irregular connectivity and usually contain combinations of triangles, quadrilaterals, tetrahedra, and hexahedra. The generation and use of unstructured grids pose new challenges in CFD. The purpose of this note is to present recent developments in unstructured grid generation and flow solution technology.
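The irregular connectivity mentioned above can be made concrete with a minimal sketch (Python; the layout is a generic illustration, not tied to any particular solver from the note): an unstructured 2D mesh stored as a vertex table plus per-cell index lists, replacing the implicit (i, j) indexing of a structured grid, with cell areas computed as a finite-volume scheme would need.

```python
# Vertex coordinates, shared by all cells that reference them.
vertices = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (0.5, 1.5)]

# Irregular connectivity: cells of mixed type, listed as vertex indices.
cells = [
    [0, 1, 2, 3],  # a quadrilateral
    [3, 2, 4],     # a triangle sharing edge (3, 2) with the quad
]

def cell_area(cell):
    """Polygon area via the shoelace formula (valid for convex cells)."""
    area = 0.0
    for a, b in zip(cell, cell[1:] + cell[:1]):
        (x1, y1), (x2, y2) = vertices[a], vertices[b]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

areas = [cell_area(c) for c in cells]
```

Unlike a structured grid, neighbor relations here must be stored or derived explicitly (e.g., by matching shared edges), which is the bookkeeping cost the abstract alludes to.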
Neural Machine Translation Inspired Binary Code Similarity Comparison beyond Function Pairs
Binary code analysis allows analyzing binary code without having access to
the corresponding source code. A binary, after disassembly, is expressed in an
assembly language. This inspires us to approach binary analysis by leveraging
ideas and techniques from Natural Language Processing (NLP), a rich area
focused on processing text of various natural languages. We notice that binary
code analysis and NLP share a lot of analogical topics, such as semantics
extraction, summarization, and classification. This work utilizes these ideas
to address two important code similarity comparison problems. (I) Given a pair
of basic blocks for different instruction set architectures (ISAs), determining
whether their semantics are similar; and (II) given a piece of code of
interest, determining if it is contained in another piece of assembly code for
a different ISA. The solutions to these two problems have many applications,
such as cross-architecture vulnerability discovery and code plagiarism
detection. We implement a prototype system INNEREYE and perform a comprehensive
evaluation. A comparison between our approach and existing approaches to
Problem I shows that our system outperforms them in terms of accuracy,
efficiency, and scalability. Case studies utilizing the system further
demonstrate that our solution to Problem II is effective. Moreover, this
research showcases how to apply ideas and techniques from NLP to large-scale
binary code analysis.
Comment: Accepted by the Network and Distributed Systems Security (NDSS) Symposium 201
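The NLP analogy above can be illustrated with a toy sketch (Python; this is an illustration of the idea only, not INNEREYE's actual model): treat each instruction as a "word" and a basic block as a "sentence", normalize away concrete operands, and compare blocks by cosine similarity of their mnemonic counts.

```python
import math
from collections import Counter

def block_vector(block):
    # Keep only the mnemonic (first token), discarding registers and
    # immediates -- a crude stand-in for the operand normalization that
    # cross-ISA matching requires.
    return Counter(ins.split()[0] for ins in block)

def cosine(u, v):
    dot = sum(u[t] * v[t] for t in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv)

# Two small basic blocks with similar semantics on different ISAs.
x86_block = ["mov eax, 1", "add eax, ebx", "ret"]
arm_block = ["mov r0, #1", "add r0, r0, r1", "bx lr"]
sim = cosine(block_vector(x86_block), block_vector(arm_block))
```

A bag-of-mnemonics model like this ignores ordering and data flow; the point is only the pipeline shape (tokenize, embed, compare), which learned instruction embeddings refine.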
Multi-Step Processing of Spatial Joins
Spatial joins are one of the most important operations for combining spatial objects of several relations. In this paper, spatial join processing is studied in detail for extended spatial objects in two-dimensional data space. We present an approach for spatial join processing that is based on three steps. First, a spatial join is performed on the minimum bounding rectangles of the objects, returning a set of candidates. Various approaches for accelerating this step of join processing were examined at last year's conference [BKS 93a]. In this paper, we focus on the problem of how to compute the answers from the set of candidates, which is handled by
the following two steps. In the first of these, sophisticated approximations are used to identify answers as well as to filter out false hits from the set of candidates. For this purpose, we investigate various types of conservative and progressive approximations. In the last step, the exact geometry of the remaining candidates has to be tested against the join predicate. The time required for computing spatial join predicates can be substantially reduced when objects are adequately organized in main memory. In our approach, objects are first decomposed into simple components which are exclusively organized by a main-memory-resident spatial data structure. Overall, we present a complete approach to spatial join processing on complex spatial objects. The performance of the individual steps of our approach is evaluated with data sets from real cartographic applications. The results show that our approach reduces the total execution time of the spatial join by factors
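The first (filter) step can be sketched as follows (Python; the nested-loop join and the rectangle encoding are illustrative assumptions, standing in for the index-accelerated join of [BKS 93a]): intersecting minimum bounding rectangles yield the candidate set that the approximation and exact-geometry steps then refine.

```python
def mbrs_intersect(a, b):
    # MBRs as (xmin, ymin, xmax, ymax); rectangles overlap iff they
    # overlap on both axes.
    (ax1, ay1, ax2, ay2), (bx1, by1, bx2, by2) = a, b
    return ax1 <= bx2 and bx1 <= ax2 and ay1 <= by2 and by1 <= ay2

def mbr_join(r, s):
    # Candidate pairs (index in r, index in s).  A real system would use
    # a spatial index here; only the candidate semantics matters.
    return [(i, j) for i, a in enumerate(r) for j, b in enumerate(s)
            if mbrs_intersect(a, b)]

R = [(0, 0, 2, 2), (5, 5, 6, 6)]
S = [(1, 1, 3, 3), (7, 7, 8, 8)]
candidates = mbr_join(R, S)
```

The filter is conservative: every true answer is among the candidates, but candidates may still be false hits, which is exactly why the two refinement steps exist.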
Segment Visibility Counting Queries in Polygons
Given a simple polygon and a set of points or line segments inside it, we develop data structures that can efficiently count the number of objects from the set that are visible to a query point or a query segment. Our main aim is to obtain fast query times while using as little space as possible. In case the query is a single point, a simple visibility-polygon-based solution achieves fast query time, at the cost of considerable space. In case the set contains only points, we present a smaller data structure based on a hierarchical decomposition of the polygon. Building on these results, we tackle the case where the query is a line segment and the set contains only points. The main complication here is that the segment may intersect multiple regions of the polygon decomposition, and that a point may see multiple such pieces. Despite these issues, we show how to achieve fast query time using only little space. Finally, we show that we can even handle the case where the objects in the set are segments, within the same bounds.
Comment: 27 pages, 13 figures
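For contrast with the data structures above, here is the brute-force baseline for the point-query, points-only case (Python; illustrative only, linear time per object rather than the fast query times the abstract aims for): a site is visible from the query point iff the connecting segment properly crosses no polygon edge.

```python
def ccw(a, b, c):
    # Signed orientation of the triple (a, b, c).
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p, q, a, b):
    # Proper crossing only; collinear grazing and shared endpoints are
    # ignored for simplicity.
    return (ccw(p, q, a) * ccw(p, q, b) < 0 and
            ccw(a, b, p) * ccw(a, b, q) < 0)

def visible_count(polygon, query, points):
    edges = list(zip(polygon, polygon[1:] + polygon[:1]))
    return sum(1 for s in points
               if not any(segments_cross(query, s, a, b) for a, b in edges))

# L-shaped polygon: its reflex corner blocks one of the two sites.
poly = [(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)]
count = visible_count(poly, (3, 1), [(1, 1), (1, 3.5)])
```

Each query inspects every edge for every object; the point of the paper's hierarchical decomposition is to avoid exactly this per-object, per-edge work.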
Efficient motion planning using generalized penetration depth computation
Motion planning is a fundamental problem in robotics and also arises in other applications including virtual prototyping, navigation, animation, and computational structural biology. It has been extensively studied for more than three decades, though most practical algorithms are based on randomized sampling. In this dissertation, we address two main issues that arise with respect to these algorithms: (1) there are no good practical approaches to check for path non-existence even for low degree-of-freedom (DOF) robots; (2) the performance of sampling-based planners can degrade if the free space of a robot has narrow passages. In order to develop effective algorithms to deal with these problems, we use the concept of penetration depth (PD) computation. By quantifying the extent of the intersection between overlapping models (e.g. a robot and an obstacle), PD can provide a distance measure for the configuration space obstacle (C-obstacle). We extend the prior notion of translational PD to generalized PD, which takes into account translational as well as rotational motion to separate two overlapping models. Moreover, we formulate generalized PD computation based on appropriate model-dependent metrics and present two algorithms based on convex decomposition and local optimization. We highlight the efficiency and robustness of our PD algorithms on many complex 3D models. Based on generalized PD computation, we present the first set of practical algorithms for low-DOF complete motion planning. Moreover, we use generalized PD computation to develop a retraction-based planner to effectively generate samples in narrow passages for rigid robots. The effectiveness of the resulting planner is shown on the alpha puzzle and part disassembly benchmarks in virtual prototyping.
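The notion of penetration depth can be illustrated in a minimal form (Python; two discs stand in for the complex 3D models, and this is translational PD only, not the generalized translational-plus-rotational PD of the dissertation): PD is the smallest translation that makes two overlapping shapes disjoint.

```python
import math

def disc_pd(c1, r1, c2, r2):
    """Translational penetration depth of two discs.

    For discs this has a closed form: the overlap along the center line,
    max(0, r1 + r2 - distance).  Zero means already disjoint.
    """
    d = math.hypot(c2[0] - c1[0], c2[1] - c1[1])
    return max(0.0, r1 + r2 - d)

pd = disc_pd((0.0, 0.0), 2.0, (3.0, 0.0), 2.0)
```

A retraction-based planner in the spirit described above would push an in-collision sample along the separating direction by this depth to land it near the free-space boundary; for non-convex models no closed form exists, hence the convex-decomposition and local-optimization algorithms.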
Solving k-SUM using few linear queries
The k-SUM problem is, given n input real numbers, to determine whether any k of them sum to zero. The problem is of tremendous importance in the emerging field of complexity theory within P, and it is in particular open whether it admits an algorithm of complexity O(n^c) with c < ceil(k/2). Inspired by an algorithm due to Meiser (1993), we show that there exist linear decision trees and algebraic computation trees of depth O~(n^3) solving k-SUM. Furthermore, we show that there exists a randomized algorithm that runs in O~(n^{ceil(k/2)+8}) time and performs O~(n^3) linear queries on the input. Thus, we show that it is possible to have an algorithm with a runtime almost identical (up to polylogarithmic factors) to the best known algorithm, but for the first time also with the number of queries on the input bounded by a polynomial that is independent of k. The O~(n^3) bound on the number of linear queries is also tighter than that of any known algorithm solving k-SUM, even allowing unlimited total time outside of the queries. By simultaneously achieving few queries to the input without significantly sacrificing runtime vis-à-vis known algorithms, we deepen the understanding of this canonical problem, which is a cornerstone of complexity within P.
We also consider a range of tradeoffs between the number of terms involved in the queries and the depth of the decision tree. In particular, we prove the existence of decision trees whose queries each involve fewer terms, at the cost of greater depth.
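The best known running time referred to above comes from the meet-in-the-middle approach, roughly n^{ceil(k/2)} time; a compact sketch (Python; illustrative baseline, not the paper's query-efficient algorithm): hash all sums of half-size subsets, then look up the negation of each complementary half-sum.

```python
from itertools import combinations

def k_sum(nums, k):
    """Decide whether any k of the numbers sum to zero (meet in the middle)."""
    h = k // 2
    # Index subsets by position so the two halves never reuse an element.
    left = {}
    for idx in combinations(range(len(nums)), h):
        left.setdefault(sum(nums[i] for i in idx), []).append(set(idx))
    for idx in combinations(range(len(nums)), k - h):
        for l in left.get(-sum(nums[i] for i in idx), []):
            if l.isdisjoint(idx):
                return True
    return False
```

Each of the O(n^{ceil(k/2)}) subset sums is a query touching ceil(k/2) input terms, so the query count grows with k; the paper's point is that O~(n^3) linear queries suffice regardless of k.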
Threesomes, Degenerates, and Love Triangles
The 3SUM problem is to decide, given a set of n real numbers, whether any three sum to zero. It is widely conjectured that a trivial O(n^2)-time algorithm is optimal, and over the years the consequences of this conjecture have been revealed. This 3SUM conjecture implies lower bounds on numerous problems in computational geometry, and a variant of the conjecture implies strong lower bounds on triangle enumeration, dynamic graph algorithms, and string matching data structures.
In this paper we refute the 3SUM conjecture. We prove that the decision tree complexity of 3SUM is O(n^{3/2} sqrt(log n)) and give two subquadratic 3SUM algorithms, a deterministic one running in O(n^2 / (log n / log log n)^{2/3}) time and a randomized one running in O(n^2 (log log n)^2 / log n) time with high probability. Our results lead directly to improved bounds for k-variate linear degeneracy testing for all odd k >= 3. The problem is to decide, given a linear function f(x_1, ..., x_k) and a set A of n real numbers, whether f vanishes on some k-tuple of A. We show the decision tree complexity of this problem is O(n^{k/2} sqrt(log n)).
Finally, we give a subcubic algorithm for a generalization of the (min,+)-product over real-valued matrices and apply it to the problem of finding zero-weight triangles in weighted graphs. We give a depth-O(n^{5/2} sqrt(log n)) decision tree for this problem, as well as an algorithm running in time O(n^3 (log log n)^2 / log n)
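The trivial quadratic algorithm the conjecture concerns can be sketched as follows (Python): sort the input once, then for each element scan the remainder with two converging pointers, O(n^2) time overall.

```python
def three_sum_exists(nums):
    """Classic O(n^2) 3SUM: sort, then two-pointer scan per anchor element."""
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1   # total too small: advance the small end
            else:
                hi -= 1   # total too large: retreat the large end
    return False
```

Every comparison here is a 3-linear query on the input; the paper's subquadratic results show this long-standing quadratic barrier is not tight.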